@MessageDriven transactions and redelivery semantics - GlassFish

What's the best way to accomplish the following?
A @MessageDriven bean does some work on the database.
On failure, I want to roll back the DB transaction,
but I also want the JMS message NOT to be redelivered, i.e., don't retry.
I can think of a few ways that might work. Are there any others, and which is the best?
Use @TransactionManagement(type=BEAN) and UserTransaction, and explicitly roll back after catching the exception, e.g.:
catch (Exception e) {
    e.printStackTrace();
    utx.rollback();
}
Use container-managed transactions, specify @TransactionAttribute(value=NOT_SUPPORTED) on onMessage, and then delegate the DB activity to a separate method annotated with @TransactionAttribute(value=REQUIRED).
Leave the transaction handling alone and reconfigure the retry property in the server. I'm using GlassFish 3.1.1, and I'm not exactly sure how to set this.
Leave everything alone and explicitly check the message for redelivery in the onMessage body, exiting early if it has been redelivered (message.getJMSRedelivered()?).
What has worked well out there? Is there a standard/best-practice way for dealing with this?

The simplest and most portable approach is to use @TransactionAttribute(value=NOT_SUPPORTED) on onMessage(), as you state, and to move the DB work to another bean with @TransactionAttribute(REQUIRES_NEW).
Be careful with the separate-method approach, as it will not work: in a JMS MDB, onMessage() is the only method on which @TransactionAttribute can be used. A minimal sketch of the bean-to-bean variant follows.
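For illustration, here is a minimal sketch of that arrangement; the queue, class, and method names are invented for the example, not taken from the question. Because onMessage() runs without a transaction, a rollback inside the worker bean does not cause the message to be redelivered:

import javax.ejb.EJB;
import javax.ejb.MessageDriven;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(mappedName = "jms/workQueue") // queue name invented for the example
public class WorkMdb implements MessageListener {

    @EJB
    private WorkService workService; // worker bean defined below

    @TransactionAttribute(TransactionAttributeType.NOT_SUPPORTED)
    @Override
    public void onMessage(Message message) {
        try {
            workService.doDbWork(message); // runs in its own, new transaction
        } catch (Exception e) {
            // The DB work has already been rolled back inside WorkService.
            // onMessage() itself has no transaction, and we swallow the
            // exception, so the container does not redeliver the message.
            e.printStackTrace();
        }
    }
}

@Stateless
class WorkService {

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void doDbWork(Message message) {
        // ... database work; a runtime exception rolls back only this transaction
    }
}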

Personally I never do any work in the MDB, but dispatch immediately to an (injected) session bean.
This bean then does the DB work. It either starts a new transaction, or I catch any exception from the bean and log it (but don't let it propagate, so no redelivery); a sketch of this is below.
This also has the advantage that the business logic is easily reusable from other places.
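A hedged sketch of that pattern, with the bean, queue, and logger names invented: the MDB keeps its default container-managed transaction for the message receive, the injected bean does the DB work in its own transaction, and the caught exception is logged rather than propagated, so the MDB's transaction commits and there is no redelivery:

import java.util.logging.Level;
import java.util.logging.Logger;
import javax.ejb.EJB;
import javax.ejb.MessageDriven;
import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;
import javax.jms.Message;
import javax.jms.MessageListener;

@MessageDriven(mappedName = "jms/orderQueue") // queue name invented
public class DispatchingMdb implements MessageListener {

    private static final Logger LOG = Logger.getLogger(DispatchingMdb.class.getName());

    @EJB
    private OrderService orderService; // the injected session bean

    // default CMT attribute (REQUIRED): receiving the message is transactional
    @Override
    public void onMessage(Message message) {
        try {
            orderService.process(message); // DB work in the bean's own transaction
        } catch (Exception e) {
            // The bean's transaction has rolled back. Log and swallow, so this
            // MDB's transaction still commits and the message is acknowledged
            // (i.e., no redelivery).
            LOG.log(Level.SEVERE, "Processing failed; dropping message", e);
        }
    }
}

@Stateless
class OrderService {

    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void process(Message message) {
        // ... database work; a runtime exception rolls back only this transaction
    }
}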

Why does "on Error Propagate" in MuleSoft rethrow the same error?

I am new to MuleSoft, and while studying it I struggle to understand why the "onErrorPropagate" scope rethrows the error after executing.
Can you explain the benefits?
An on-error-propagate will roll back any transactions, execute, and use that result to rethrow the existing error, meaning its owner will be considered as "failing."
The best use is in a layered system, to allow each layer to do its own small part of an error response.
If you are familiar with Java you can think of it as catching the exception and re-throwing it. For example, sometimes you want to do something with the error yourself, but still want to propagate it upwards for higher levels to deal with.
You could add logging in a specific flow for the error but then leave it to the parent flows to actually deal with the exception.
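To make that analogy concrete, a rough Java equivalent (purely illustrative; the method and class names are invented):

class OnErrorPropagateAnalogy {

    // Rough Java analogy of a flow with an on-error-propagate handler:
    // handle the error locally, then rethrow so the caller (the parent flow)
    // still sees the failure.
    void flow() throws Exception {
        try {
            processOrder(); // stands in for this flow's processors (invented name)
        } catch (Exception e) {
            System.err.println("order flow failed: " + e); // local handling, e.g. logging
            // on-error-continue would instead swallow the exception here (no rethrow)
            throw e; // on-error-propagate: the parent flow's handler deals with it next
        }
    }

    void processOrder() throws Exception {
        // ... this flow's actual work
    }
}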
The "onErrorPropagate" propagates (rethrows) the error to the parent flow (or the global error handler if it's already reached the main flow).
This can be useful in a few cases.
Say you have some flow-specific error handling (e.g. if something goes wrong, set a default payload).
Then you propagate this error to the next level, where your global error handler, say, stores some info in a QA database.
You don't want to have that database connector in every single error handler.
In this way you can achieve a Java-inheritance-like structure for your errors.
Side note: if you want your error to be handled and go no further, you can use "onErrorContinue"

ctx.commandFailed vs throwing in PersistentEntity

In the Auction Example I have seen both ctx.commandFailed(...) and throw SomeException(...). Is there a good reason to throw instead of using the API, and is there a difference between the two?
Persistent entity command handlers and after-persist callbacks are wrapped in try/catch blocks; if an exception is caught, it will be passed to ctx.commandFailed(...) for you.
There is a subtle difference between the two to be aware of. If you throw an exception, processing of the command will of course stop immediately. If, however, you pass an exception to ctx.commandFailed(...), the exception is sent back to the invoker of the command, but processing won't stop. You could in theory go on to return directives to persist events, which would be an odd thing to do. In practice, what you need to do is return ctx.done after invoking ctx.commandFailed(...).
In general it's probably simpler and safer to simply throw the exception.
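For concreteness, a hedged Java sketch of the difference; this is not the actual auction code, and the command, event, and validation rule are invented for the example:

// Inside a Lagom PersistentEntity behavior builder ("b"). PlaceBid,
// BidPlaced, and the price check are invented here; the real example differs.
b.setCommandHandler(PlaceBid.class, (cmd, ctx) -> {
    if (cmd.price <= 0) {
        // Option 1: report the failure explicitly. This sends the exception
        // back to the invoker but does NOT stop processing, so a directive
        // must still be returned; ctx.done() persists nothing.
        ctx.commandFailed(new IllegalArgumentException("price must be positive"));
        return ctx.done();
    }
    // Option 2 (simpler and safer): just throw. The framework's try/catch
    // passes the exception to ctx.commandFailed(...) for you and stops
    // processing immediately:
    //   throw new IllegalArgumentException("price must be positive");
    return ctx.thenPersist(new BidPlaced(cmd.price),
            evt -> ctx.reply(akka.Done.getInstance()));
});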

NServiceBus UnitOfWork to swallow certain exceptions and avoid message failure

I have an interesting use case where certain exception types mean "This message is no longer valid and should be ignored" but this code doesn't have any awareness of the Bus in order to call Bus.DoNotContinueDispatchingCurrentMessageToHandlers().
I loathe boilerplate code like try/catch blocks that need to be present in every single message handler. So I started implementing a UnitOfWork to handle and swallow the exception, but I can't find a way to tell the framework that "Yes, this code generated an exception, but forget about that and just complete the transaction."
Bus.DoNotContinueDispatchingCurrentMessageToHandlers() does not work. I tried having an ITransport injected and calling AbortHandlingCurrentMessage() but that caused the entire universe to blow up. Even stepping through the source code I seem to be at a loss.
Note that it very well may be that this is a horrible idea, because faking that there is no exception when there is in fact an exceptional case would cause the transaction to commit, causing who knows how many bad unknown side effects. So it would be preferable to have a method that still rolls back the transaction but discards the message. But I would be interested in a potential "Yes I know what I'm doing, commit the transaction regardless of the exception" option as well.
As of NServiceBus version 4.4 you can control this by injecting a behavior into our handler pipeline.
This lets you control which exceptions to mute.
class MyExceptionFilteringBehavior : IBehavior<HandlerInvocationContext>
{
    public void Invoke(HandlerInvocationContext context, Action next)
    {
        try
        {
            // invoke the handler/rest of the pipeline
            next();
        }
        // catch specific exception types, or filter like this:
        catch (Exception ex)
        {
            // modify this to your liking
            if (ex.Message == "Lets filter on this text")
                return;
            throw;
        }
    }
}
There are several samples of how this works:
http://docs.particular.net/samples/pipeline/
That said, I totally agree with Ramon that this trick should only be used if you can't change the design to avoid it.
A dirty solution would be to have a unit of work inspect the exception, put the message ID in a shared 'ignore' bag (a concurrent dictionary in memory, a DB, whatever works for you), and let the handler fail so that everything is rolled back; then, on the retry, have a generic message handler compare the message ID and call Bus.DoNotContinueDispatchingCurrentMessageToHandlers().
If you do not want to work with a unit of work, then you could try to use the AppDomain.FirstChanceException event.
I wouldn't advise any of these as a good solution :-)
Why would you like to 'swallow' unhandled exceptions?
If you want to ignore an exception, then you should catch it in the handler, log it, and just return.
What I'm more interested in is: what about state? You may already have written to a store. Shouldn't those writes be rolled back? If you swallow an exception, the transaction commits.
It seems to me you are running in a kind of 'at least once delivery' environment. Then you need to store some kind of message ID somewhere.
Or is it an action initiated by several actors based on a stale state? In that case you need a first/last-write-wins construction that just ignores a command based on a stale item.
If you handle an event, then swallowing an exception seems incorrect. They usually say that you can ignore commands, but that you always have to handle events.
Shouldn't this be part of validation? If you validate a command then you can decide to ignore it.

Monorail, nhibernate and session-per-request pattern

I need some insights and thoughts about a refactoring I'm about to do to our web-app.
We initially used the session-per-request pattern with NHibernate and ActiveRecord, creating and disposing the session in On_BeginRequest / On_EndRequest in the HttpApplication. Later on, we realized that any DB-related exceptions got thrown outside of our MonoRail context, meaning that our rescues didn't kick in. As another side effect, we didn't have the option to skip creating an NHibernate session entirely in any given action, which in some cases would be desirable.
So we rewrote it to create sessions in Initialize() / Contextualize() in our base controller and dispose them in the base controller's Dispose(). We also roll back the session in our rescue controller to prevent any half-written changes reaching the DB. So far, so good. The reason for doing it in Dispose() is that we want the session to live through the view rendering, because of lazy loading, as well as view components that need a session (we could switch to units of work for the view components, but they don't seem to have a Dispose()...)
However, I'm experiencing some deadlock issues where transactions started in the DB are neither rolled back nor committed, and I can't get my head around it, mostly because of the mess we've made with this approach...
So I found this article: http://hackingon.net/post/NHibernate-Session-Per-Request-with-ASPNET-MVC.aspx
And I thought, "Filters, we can use those in MonoRail too!", because they can kick in on BeforeAction and AfterRendering.
My questions then are:
What happens if an Exception occurs in the filter?
Will AfterRendering fire even if an Exception occurs in the action or the rendering?
Would you recommend this approach, if not, what are your suggestion instead?
Any pointers are very much appreciated!
You need an application error handler to take care of exception handling.
Attach a debugger and find out.
Probably not (even though it is my article). It doesn't work with RenderAction. It's best to use an IoC container to control the lifetime of connections.

How to ensure that NHibernate is inside a transaction when saving, updating or deleting?

I'd like to ensure that when I persist any data to the database using Fluent NHibernate, the operations are executed inside a transaction. Is there any way of checking that a transaction is active via an interceptor? Or any other eventing mechanism?
More specifically, I'm using System.Transactions.TransactionScope for transaction management, and I just want to stop myself from forgetting to use it.
If you had one place in your code that built your session, you could start the transaction there and fix the problem at a stroke.
I haven't tried this, but I think you could create a listener implementing IFlushEventListener. Something like:
// class name invented for the example
public class RequireTransactionFlushListener : IFlushEventListener
{
    public void OnFlush(FlushEvent @event)
    {
        if (!@event.Session.Transaction.IsActive)
        {
            throw new Exception("Flushing session without an active transaction!");
        }
    }
}
It's not clear to me (and Google didn't help) exactly when OnFlush is called. There also may be an implicit transaction that could set IsActive to true.
If you had been using Spring.NET for your transaction handling, you could use an anonymous inner object to ensure that your DAOs/service-layer objects are always exposed with a TransactionAdvice around their service methods.
See the Spring.NET documentation for an example.
To what end? NHProf will give you warnings if you're not executing inside a transaction. Generally you should be developing with this tool open anyway...