Receive as Pick branch trigger does not fire - WCF

I have a WF4 service with a flowchart as the root activity. It contains multiple correlated Receive activities and decision branching to step through an approval process. The Receive activities work perfectly until I try to use one as the trigger for a Pick branch.
I am running tracking, so I can see that the Receive is opened, and in the persistence store I can see the associated bookmark. When I send a client message of the expected type, it does not trigger. I have a Delay Pick branch that fires OK, but then the subsequent Receive also does not work.
I have checked these receive activities individually and they work OK when not used as the pick trigger. I have tried the pick within a Sequence and a While but no difference.
I cannot see any difference between my implementation and the many examples on the web. Am I missing something extra that is required when the Receive is encapsulated by a Pick branch?

There is nothing special about a PickBranch trigger that would cause a Receive to behave differently, so I suspect the problem is with the Receive itself. What kind of errors are you seeing at the client application?


How to keep NServiceBus sending messages when a handler errors

I'm new to NServiceBus, so maybe I'm asking something pretty silly here, but is there a way to make NServiceBus not stop sending any messages that are sent in response to a message whose handler fails?
Let me explain with a simple example.
Suppose I have an OrderPaidEvent that has a handler that does the following:
1. Look for the customer
2. Start a DB transaction
3. Update the customer to a good customer
4. Send a CustomerUpgradedToGoodCustomerEvent message
5. Commit the DB transaction
Fairly straightforward, all is well in the world. Now a few months later someone else figures that an email would be nice when an order is paid and thus adds another handler to the OrderPaidEvent to send an email.
Unfortunately, now whenever the mail server has an issue, this second handler fails with an error, which in turn prevents the original CustomerUpgradedToGoodCustomerEvent message from being sent (step 4). But because the DB transaction was already committed (step 5), the customer has already been upgraded to a good customer in the database.
This means that even if the OrderPaidEvent handler is retried the customer no longer changes and thus the CustomerUpgradedToGoodCustomerEvent message is never sent. Worse yet, this is all because of a change to the code that has nothing to do with the original message handler and will thus be difficult to detect.
This seems like a massive flaw and since I'm new to this I'm certain there's something I'm doing wrong, but I can't seem to figure out what it is.
Any help from you fine people would be great.
Thanks in advance.
How about breaking down your procedural code into separate handlers?
Each logical operation will then either be done or not done, based on the successful completion of that granular task.
If you add a Saga to the mix, you can make business decisions based on the steps your Saga has completed.
Also, maybe read more about transactions and NServiceBus here.
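As a rough illustration of that split (a sketch under assumptions, not code from the thread), each concern gets its own NServiceBus handler; the CustomerId property is invented for the example, and the email handler would ideally live in its own endpoint:
using System;
using System.Threading.Tasks;
using NServiceBus;

// Message contracts as named in the question; CustomerId is assumed for illustration
public class OrderPaidEvent : IEvent { public Guid CustomerId { get; set; } }
public class CustomerUpgradedToGoodCustomerEvent : IEvent { public Guid CustomerId { get; set; } }

// Handles only the customer upgrade; its outgoing event is dispatched once this
// handler (and its transaction) completes successfully.
public class UpgradeCustomerHandler : IHandleMessages<OrderPaidEvent>
{
    public async Task Handle(OrderPaidEvent message, IMessageHandlerContext context)
    {
        // ... load and update the customer here ...
        await context.Publish(new CustomerUpgradedToGoodCustomerEvent { CustomerId = message.CustomerId });
    }
}

// The email concern lives in its own handler, ideally hosted in its own endpoint
// (its own queue), so a mail-server outage only retries that endpoint's copy of
// OrderPaidEvent and never blocks the customer-upgrade handler above.
public class SendOrderPaidEmailHandler : IHandleMessages<OrderPaidEvent>
{
    public Task Handle(OrderPaidEvent message, IMessageHandlerContext context)
    {
        // ... send the email here ...
        return Task.CompletedTask;
    }
}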
First of all, I would send out the CustomerUpgradedToGoodCustomerEvent after the commit. At that point you are sure that the event actually took place.
And in response to your question: you could handle the email with some 'SendEmail' command that is sent after the DB commit and before the event is published. If that command's handler fails, it will not hurt the handling of the OrderPaid event. When mail is up again, the command can be retried and handled normally.
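A minimal sketch of that ordering, assuming NServiceBus-style handlers and reusing the OrderPaidEvent / CustomerUpgradedToGoodCustomerEvent contracts from the sketch above (the SendEmail contract and the transaction helper are invented for illustration):
using System;
using System.Threading.Tasks;
using NServiceBus;

public class SendEmail : ICommand { public Guid CustomerId { get; set; } }

public class OrderPaidHandler : IHandleMessages<OrderPaidEvent>
{
    public async Task Handle(OrderPaidEvent message, IMessageHandlerContext context)
    {
        // Steps 1-3 and 5 from the question: load, update and commit in a local transaction
        await UpgradeCustomerInOwnTransaction(message.CustomerId);

        // Hand the email off to a separate endpoint; if that endpoint's handler fails,
        // only the SendEmail command is retried, not this handler
        await context.Send(new SendEmail { CustomerId = message.CustomerId });

        // Publish the event knowing the state change has already been committed
        await context.Publish(new CustomerUpgradedToGoodCustomerEvent { CustomerId = message.CustomerId });
    }

    static Task UpgradeCustomerInOwnTransaction(Guid customerId)
    {
        // Placeholder for the DB work described in the question
        return Task.CompletedTask;
    }
}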

AXON framework synchronous response

I am new to the Axon framework and am using it for our development. We have a requirement where a command (on the command side) is created to persist data, and for it an event is triggered which is consumed on the query side. Now we need a response back from the query side to the command side which says whether the record was persisted into the database successfully (a custom success message) or, if it failed, the reason for the failure (a custom exception message as the response). Kindly help if there is any way to achieve such a scenario.
Here the command side and query side are two different microservices, and we are using RabbitMQ for the event-driven communication.
Thanks in advance
I think that what you are asking is if there is a way for the command and event to be processed in a single transaction?
If you use a subscribing event processor, running in the same JVM, the event is processed synchronously and the whole transaction is rolled back in case of an exception in an event handler. This is not the case here, because you have loosely coupled separate services, which is good.
It's best practice for the aggregate with the command handler to have all the information available to decide whether or not the command can successfully be processed. When an event is applied, that is a signal that it has happened, and the other services (the query side in this case) have to be informed. It's not good practice for a query module to overrule this ("you say it happened, I say it didn't"). If there is an error in the query side, you fix it and replay the event.
If it really is an error in the event handler that the whole system must know about, that is really a separate event. You can publish such an event directly on the event bus and notify the whole system. Something like this:
@Autowired
private EventBus eventBus;
(...)
// publish a dedicated failure event so every interested service can react to it
CatastrophicFailureEvent failureEvent = new CatastrophicFailureEvent("OH NO!");
eventBus.publish(GenericEventMessage.asEventMessage(failureEvent));
I think you might need to reconsider your architecture. Keep in mind that events should encapsulate the irreversible state changes of your system. These state changes should not be questioned after they have happened. Your query side should only need to care about projecting these valid state changes that your command side has decided on.
If you need to check whether a user already existed, you need to do this on the command side in your aggregate. The aggregate can keep a list of all the existing usernames and throw an exception if an invalid command is given. The command response (tip: the sendAndWait() method on the CommandGateway returns a response) can then be used to inform your user about the success or failure of the action.
The following flow might solve your problem, but keep in mind that the user will get a callback on the success of the action even though the query side might not have processed its result yet. This part is eventually consistent.
Command Side:
1. A request from the frontend is handled by a Controller class, which creates a corresponding command.
2. The command is invoked and handled by a command handler, which creates the corresponding event or throws an exception if the user already exists.
3. The invoker of the command is informed about the success of the command, or the exception is handled and the error shown to the user.
4. The event is published through the RabbitMQ event bus if the command was successful.
Query side:
5. The event published in step 4 is consumed by the event handler on the query side. No checks or validations should be necessary, since they were already handled on the command side.
@Mzzl
Series of activities
Command Side:
1. A request from the frontend is handled by a Controller class, which creates a corresponding command.
2. The above command is invoked and handled by a command handler, which in turn creates the corresponding event.
3. The above event is then published through the RabbitMQ event bus.
Query Side:
4. The event published in step 3 is consumed by the event handler on the query side.
5. The event handler has the logic to perform the DB transaction (let's assume it adds a user). Once a user is added, a success message or a failure message (let's assume the user is already in the DB, so a duplicate entry could not be created) should flow from the query side back to the command side and eventually back to the UI as a response.
I'm not sure I've fully understood your issue (especially the microservice part :)),
but if your problem is related to having the query side up to date after the command execution, then you can have a look at this project.
In this example, you can see that he uses a SubscriptionQueryResult in conjunction with a QueryUpdateEmitter (see here)
Basically, you subscribe to query-side changes before the command is issued, and you block after the command execution until the query side sends a notification that it is up to date.
This way you can avoid exposing the eventual consistency to the caller.

RabbitMQ+MassTransit: how to cancel queued message from processing?

In some exceptional situations I need to somehow tell the consumer on the receiving side that some messages shouldn't be processed. Otherwise the two systems will become out of sync (we deal with some outdated external systems, and if, for example, a connection is dropped, we have to discard all queued operations in the scope of that connection).
Take a risk and resolve problem messages manually? Compensation actions (that could be tough to support in my case)? Anything else?
There are a few ways:
You can set a time-to-live when sending a message: await endpoint.Send(myMessage, c => c.TimeToLive = TimeSpan.FromHours(1));, but this will apply to all messages that are sent (or published) like this. I would consider this, after looking at your requirements. This is technical, but it is a proper messaging pattern.
Make the TTL and a generation timestamp properties of your message itself and let the consumer decide if the message is still worth processing. This is more of a business-level approach and probably the most correct way.
Combine tech and business - keep the timestamp and TTL in message headers so they don't pollute your message contracts, and filter them out using custom middleware. In this case, you need to be careful to log such drops so you won't be left wondering why messages disappear now and then (a sketch of this idea follows the last option below).
Almost any unreliable integration can be monitored using sagas, with timeouts. For example, we use a saga to integrate with Twilio. Since we have no ability to open a webhook for them, we poll after some interval to check the message status. You can start a saga when you get a message and schedule a message to check if the processing is still waiting. As discussed in comments, you can either use the "human intervention required" way to fix the issue or let the saga decide to drop the message.
A similar way could be to use a lookup table where you put the list of messages that aren't relevant for processing. Such a table would be similar to the list of sagas. It seems that this way would also require scheduling. Both here and for the saga, I'd recommend using a separate receive endpoint (a queue) for the DropIt message, with only one consumer. That would prevent DropIt messages from getting stuck behind the integration messages that are waiting to be processed (some of which should already have been dropped).
Use the RabbitMQ management API to remove messages from the queue. This is the worst method; I won't recommend it.
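To make the header-based option above more concrete, here is a rough sketch (my own assumptions, not the poster's code) using MassTransit's SendContext/ConsumeContext header APIs. The SubmitOrder contract and the "x-expires-at" header name are invented, and for brevity the check sits at the top of the consumer rather than in a custom consume filter as the answer suggests:
using System;
using System.Globalization;
using System.Threading.Tasks;
using MassTransit;
using Microsoft.Extensions.Logging;

public class SubmitOrder { public Guid OrderId { get; set; } }

public static class ExpiringSender
{
    // Stamp an expiry header when sending, without touching the message contract
    public static Task SendWithExpiry(ISendEndpoint endpoint, SubmitOrder message, TimeSpan ttl)
    {
        return endpoint.Send(message, ctx =>
            ctx.Headers.Set("x-expires-at", DateTime.UtcNow.Add(ttl).ToString("O")));
    }
}

public class SubmitOrderConsumer : IConsumer<SubmitOrder>
{
    readonly ILogger<SubmitOrderConsumer> _logger;

    public SubmitOrderConsumer(ILogger<SubmitOrderConsumer> logger) => _logger = logger;

    public Task Consume(ConsumeContext<SubmitOrder> context)
    {
        // Drop (and log) anything that is past its business deadline
        var raw = context.Headers.Get<string>("x-expires-at");
        if (raw != null
            && DateTime.TryParse(raw, null, DateTimeStyles.RoundtripKind, out var expiresAt)
            && expiresAt < DateTime.UtcNow)
        {
            _logger.LogWarning("Dropping expired message {MessageId}", context.MessageId);
            return Task.CompletedTask;
        }

        // ... normal processing of the order ...
        return Task.CompletedTask;
    }
}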
From what I understand, you're building a system that sends messages to 3rd-party systems. In other words, systems you don't control. They have an API, but compensating actions aren't always possible, because the API doesn't provide them or because actions are performed inside the 3rd-party system that can't be compensated or rolled back?
If possible, try to solve this via sagas. Make sure the saga executes the different steps (the sending of messages) in the right order, so that messages that cannot be compensated are sent last. That way, messages that can be compensated will be compensated by the saga if they fail, and the ones that cannot be compensated are only sent once you're as sure as possible that they won't have to be, because that last message is the final step in synchronizing all the systems.
All in all, this is one of the problems with distributed systems: keeping everything in sync. Compensating actions are the way to deal with it. If compensating actions aren't possible, you're in a very difficult situation. Try to see if the business can help by becoming more flexible and accepting that you need to compensate things where they would initially tell you it's not possible.
In some exceptional situations I need to somehow tell the consumer on the receiving side that some messages shouldn't be processed.
Can't you invert this into:
Tell the consumer that an earlier message can be processed.
This way you can easily turn this into a state machine (like a saga) that acts on two messages. If the second message never arrives, you can discard the first after a while or do something else.
The strategy here is to halt/wait until certain that no actions need to be reverted.
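A rough sketch of that "wait for the second message" idea as a MassTransit state machine saga. This is my own illustration, not code from the thread: the OperationQueued, OperationConfirmed, ConfirmationExpired and ProcessOperation contracts are invented, it targets the v8-style state machine API, and it assumes a message scheduler is configured on the bus:
using System;
using MassTransit;

// Assumed message contracts for the example
public interface OperationQueued { Guid OperationId { get; } }
public interface OperationConfirmed { Guid OperationId { get; } }
public interface ConfirmationExpired { Guid OperationId { get; } }
public interface ProcessOperation { Guid OperationId { get; } }

public class ConfirmationState : SagaStateMachineInstance
{
    public Guid CorrelationId { get; set; }
    public string CurrentState { get; set; }
    public Guid? ExpiryTokenId { get; set; }
}

public class ConfirmationStateMachine : MassTransitStateMachine<ConfirmationState>
{
    public ConfirmationStateMachine()
    {
        InstanceState(x => x.CurrentState);

        Event(() => Queued, x => x.CorrelateById(ctx => ctx.Message.OperationId));
        Event(() => Confirmed, x => x.CorrelateById(ctx => ctx.Message.OperationId));

        Schedule(() => Expiry, x => x.ExpiryTokenId, s =>
        {
            s.Delay = TimeSpan.FromMinutes(30); // how long to wait for the second message
            s.Received = r => r.CorrelateById(ctx => ctx.Message.OperationId);
        });

        Initially(
            When(Queued)
                .Schedule(Expiry, ctx => ctx.Init<ConfirmationExpired>(new { ctx.Message.OperationId }))
                .TransitionTo(AwaitingConfirmation));

        During(AwaitingConfirmation,
            // The "go ahead" arrived, so the queued operation is safe to process
            When(Confirmed)
                .Unschedule(Expiry)
                .PublishAsync(ctx => ctx.Init<ProcessOperation>(new { ctx.Message.OperationId }))
                .Finalize(),
            // No confirmation within the window: discard the queued operation
            When(Expiry.Received)
                .Finalize());

        SetCompletedWhenFinalized();
    }

    public State AwaitingConfirmation { get; private set; }
    public Event<OperationQueued> Queued { get; private set; }
    public Event<OperationConfirmed> Confirmed { get; private set; }
    public Schedule<ConfirmationState, ConfirmationExpired> Expiry { get; private set; }
}
The design point is the one from the answer: nothing is processed until the confirming message arrives, and the scheduled ConfirmationExpired message is what lets the saga clean up (or take some other action) when it never does.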

NServiceBus: calling a Saga from within another Saga

I'm new to NServiceBus and trying to find the best way to model a scenario which uses compensating transactions.
For example, say I have a typical BookHotel scenario:
In the happy case, the messaging flow would proceed as follows:
BookHotelCommand --> BookHotelSaga
BookFlightCommand --> Reply IFlightBookedMessage
BookRentalCommand --> Reply IRentalBookedMessage
ReplyToOriginator --> HotelBookedMessage
How would I model compensating transactions in the above flow? I was initially thinking of calling an "UnbookHotelSaga" in one of the replies above, based on some business conditions. However, I seem to be running into some challenges getting this working. Can someone with Saga experience comment on whether this is the right approach?
Here is the scenario I was thinking would work by calling another Saga:
BookHotelCommand --> BookHotelSaga
BookFlightCommand --> Reply IFlightBookedMessage
BookRentalCommand --> (condition satisfied) --> UnbookHotelCommand --> UnbookHotelSaga
UnbookRentalCommand --> Reply IUnbookRentalMessage
UnbookFlightCommand --> Reply IUnbookFlightMessage
UnbookHotelCommand --> ReplyToOriginator --> UnbookedHotelMessage
Can someone please advise on the best-practices approach to implementing compensating transactions?
I'm not really sure I understand the long running process and what it should do. Some more information on functionality would probably help.
One of the first things I noticed was the mention of IUnbookRentalMessage. First of all, don't use I at the start of message names. The fact that messages can be interfaces has to do with polymorphism and multiple inheritance in .NET; the messages themselves have no technical meaning on the wire, so you should not include the I.
Also, commands are named in the imperative and events in the past tense. So BookFlight for a command and FlightBooked for an event.
You could theoretically create multiple sagas that all take part in a single long running business process. A saga called BookingPolicy or BookingProcess or BookingSaga to orchestrate the entire process. And FlightBookingPolicy for the flight and HotelBookingPolicy for the hotel.
If you start out with a BookFlight command, the FlightBookingPolicy could publish an event called FlightBooked. The BookingPolicy could use that event to start its own instance of the saga. So for example, the (ASP.NET) website that sends all the commands, would not have to know about the BookingPolicy. It just sends the appropriate commands with the appropriate data. The same goes for hotel, car, etc.
Then at some point, the website sends a CommitBooking or FinishUpMyVacation command, which does arrive at the BookingPolicy saga and that finalizes the entire booking. It sends an event BookingFinishingUp or something. Based on that event, some handler might deduct money from a creditcard. Another handler does integration with 3rd parties to actually submit the vacation. Another handler sends out emails. Etcetera.
Finally when the BookingPolicy (or even another saga) is finished, the BookingPolicy saga will publish an event called BookingFinished and the appropriate FlightBookingPolicy and HotelBookingPolicy and CarBookingPolicy also wrap up and end their work. Whatever that may be.
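As a rough sketch of that orchestration (my own illustration using the NServiceBus saga API, with invented contracts and a BookingId correlation property, and condensed to a single finishing event rather than both BookingFinishingUp and BookingFinished):
using System;
using System.Threading.Tasks;
using NServiceBus;

// Assumed message contracts for the example
public class FlightBooked : IEvent { public Guid BookingId { get; set; } }
public class HotelBooked : IEvent { public Guid BookingId { get; set; } }
public class CommitBooking : ICommand { public Guid BookingId { get; set; } }
public class BookingFinished : IEvent { public Guid BookingId { get; set; } }

public class BookingPolicy : Saga<BookingPolicyData>,
    IAmStartedByMessages<FlightBooked>,
    IAmStartedByMessages<HotelBooked>,
    IHandleMessages<CommitBooking>
{
    protected override void ConfigureHowToFindSaga(SagaPropertyMapper<BookingPolicyData> mapper)
    {
        // Correlate every message to the saga instance by BookingId
        mapper.ConfigureMapping<FlightBooked>(m => m.BookingId).ToSaga(s => s.BookingId);
        mapper.ConfigureMapping<HotelBooked>(m => m.BookingId).ToSaga(s => s.BookingId);
        mapper.ConfigureMapping<CommitBooking>(m => m.BookingId).ToSaga(s => s.BookingId);
    }

    public Task Handle(FlightBooked message, IMessageHandlerContext context)
    {
        Data.FlightBooked = true;
        return Task.CompletedTask;
    }

    public Task Handle(HotelBooked message, IMessageHandlerContext context)
    {
        Data.HotelBooked = true;
        return Task.CompletedTask;
    }

    public async Task Handle(CommitBooking message, IMessageHandlerContext context)
    {
        // Payment, emails and 3rd-party integration are handlers subscribed to this event;
        // the child policies (FlightBookingPolicy, HotelBookingPolicy, ...) also wrap up on it.
        await context.Publish(new BookingFinished { BookingId = Data.BookingId });
        MarkAsComplete();
    }
}

public class BookingPolicyData : ContainSagaData
{
    public Guid BookingId { get; set; }
    public bool FlightBooked { get; set; }
    public bool HotelBooked { get; set; }
}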
Does that make sense? If you want, you can also continue the conversation on https://discuss.particular.net/ or support@particular.net.

Grails test JMS messaging

I've got a JMS messaging system implemented with two queues. One is used as a standard queue; the second is an error queue.
This system was implemented to handle database concurrency in my application. Basically, there are users, and users have assets. One user can interact with another user, and as a result of this interaction their assets can change. One user can interact with a single user at once, so they cannot start another interaction before the first one finishes. However, one user can be in interactions with other users multiple times [as long as they started the interaction].
What I did was: created an "interaction registry" in Redis, where I store the ID of users who begin an interaction. During the interaction I gather all changes that should be made to the second user's assets, and after the interaction is finished I send those changes to the queue [the user who started the interaction is saved within the original transaction]. After the interaction is finished I clear the ID from the registry in Redis.
The listener on my queue will receive a message with information about the changes that need to be made to the user. The listener will get all objects which require a change from the database and update them. The listener will check before each update if there is an interaction started by the user being updated. If there is, the listener will roll back the transaction and put the message back on the queue. However, if there's something else wrong, the message will be put on the error queue and will be retried several times before it is logged and marked as failed. Phew.
Now I'm at the point where I need to create a proper integration test, so that I make sure no future changes will screw this up.
Positive testing is easy; unfortunately, I have to test scenarios where, during updates, there's an OptimisticLockFailureException, my own UserInteractingException, and some other exceptions [catch (Exception e), that is].
I can simulate my UserInteractingException by creating a payload with hundreds of objects to be updated by the listener and changing one of them in the test. Same thing with OptimisticLockFailureException. But I have no idea how to simulate something else [I can't even think of what it could be].
Also, this testing scenario is based on a fluke [well, the chance that the presented scenario will not trigger an error is very low], which is not something I like. I would like to have something more concrete.
Is there any other, good way to test these scenarios?
Thanks
I did as I described in the original question and it seems to work fine.
Any delays I can test with Camel.