When using SAP JCo 3.0, is it necessary to invoke BAPI_TRANSACTION_ROLLBACK?

Is it necessary to invoke BAPI_TRANSACTION_ROLLBACK, or will JCoContext.end() alone perform an implicit rollback?

If the particular JCoContext.end() call actually ends the stateful call sequence (which is not the case for a nested context), then the underlying RFC connection will be reset. This also means that an associated uncommitted LUW (Logical Unit of Work) will be canceled, which results in an implicit rollback.
But if you know that a rollback is required, why not call BAPI_TRANSACTION_ROLLBACK directly? I would prefer explicit operations to implicit assumptions. I suspect this would make debugging and tracing easier as well.

Related

SQL Server 2014 sp_reset_connection behavior inside a transaction

What is the behavior of sp_reset_connection when it is run inside a transaction? I am looking for clarification so I can decide whether I need to restructure my code or can leave it as is.
I have a C# application that uses an ORM (PetaPoco) that doesn't seem to respect the ambient transaction (System.Transactions.TransactionScope). I can see the transaction beginning in SQL Profiler, but before every call in the transaction the ORM sends a sp_reset_connection call. I also see the Transaction Commit event in SQL Profiler when my ambient transaction completes.
I have seen other questions on Stack Overflow suggesting that the transaction isolation level is reset when sp_reset_connection is called (see: What does "exec sp_reset_connection" mean in Sql Server Profiler?), but I can't find any info on what happens specifically when sp_reset_connection is called inside a transaction.
The sp_reset_connection stored procedure is not well documented, since it's an internal stored procedure that is never called directly. Be aware that it may change or be removed in the future without notice. That said, it is useful to know a little about the internals for troubleshooting and for correlating activity in trace data.
The purpose of the proc is to restore the connection's environment state to that of a newly opened connection, in support of connection pooling. The client API specifies whether a connection is to be reset by setting either the RESETCONNECTION or RESETCONNECTIONSKIPTRAN bit of the TDS protocol status field when a request is issued on a connection newly reused from the pool. SQL Server internally invokes the sp_reset_connection RPC you see in a trace when either of those flags is set.
The TDS protocol documentation describes these flags, so we can infer what sp_reset_connection does under the covers. Below is an excerpt from the documentation:
RESETCONNECTION (Introduced in TDS 7.1) (From client to server) Reset
this connection before processing event. Only set for event types
Batch, RPC, or Transaction Manager request. If clients want to set
this bit, it MUST be part of the first packet of the message. This
signals the server to clean up the environment state of the connection
back to the default environment setting, effectively simulating a
logout and a subsequent login, and provides server support for
connection pooling. This bit SHOULD be ignored if it is set in a
packet that is not the first packet of the message.
This status bit MUST NOT be set in conjunction with the
RESETCONNECTIONSKIPTRAN bit. Distributed transactions and isolation
levels will not be reset. 0x10
RESETCONNECTIONSKIPTRAN (Introduced in TDS 7.3) (From client to
server) Reset the connection before processing event but do not modify
the transaction state (the state will remain the same before and after
the reset). The transaction in the session can be a local transaction
that is started from the session or it can be a distributed
transaction in which the session is enlisted. This status bit MUST NOT
be set in conjunction with the RESETCONNECTION bit. Otherwise
identical to RESETCONNECTION.
The low-level client API (SqlClient here) indirectly controls the behavior of sp_reset_connection by setting those flags as needed. When a connection is reused outside a TransactionScope, the RESETCONNECTION flag is set, as can be seen in a network packet trace (Wireshark; the captures are not reproduced here). A trace of the first request issued on a reused connection from within the scope of a TransactionScope using block shows the RESETCONNECTIONSKIPTRAN flag set instead.
So the reset within a TransactionScope will not roll back the transaction. Consequently, even if your ORM follows an open/execute/close pattern for data access, your outer TransactionScope transaction context will be honored. The transaction will be committed as long as you invoke TransactionScope.Complete before exiting the transaction scope. You can add a TM: Commit Tran completed event to your Profiler trace to see the commit when the TransactionScope exits.
BTW, Profiler/SQL Trace is deprecated; use Extended Events instead going forward. Also, as suggested in the other answer, be sure to specify the desired transaction isolation level (e.g. read committed) when you use TransactionScope. The default is Serializable, which can have side effects like unnecessary blocking and deadlocks. I suggest you use Serializable only if you specifically need that strict level of isolation.
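Here is a minimal sketch of that suggestion: a TransactionScope opened with an explicit isolation level instead of the Serializable default. The connection string, table, and values are placeholders.

    using System;
    using System.Data.SqlClient;
    using System.Transactions;

    class TransactionScopeExample
    {
        static void Main()
        {
            var options = new TransactionOptions
            {
                // Override the Serializable default mentioned above
                IsolationLevel = IsolationLevel.ReadCommitted,
                Timeout = TransactionManager.DefaultTimeout
            };

            using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
            {
                // Connections opened inside the scope enlist in the ambient transaction;
                // pooled connections are reset with RESETCONNECTIONSKIPTRAN, so the
                // transaction state survives the reset.
                using (var conn = new SqlConnection("Server=.;Database=MyDb;Integrated Security=true"))
                {
                    conn.Open();
                    using (var cmd = new SqlCommand("UPDATE dbo.MyTable SET Col = 1 WHERE Id = @id", conn))
                    {
                        cmd.Parameters.AddWithValue("@id", 42);
                        cmd.ExecuteNonQuery();
                    }
                }

                scope.Complete(); // without this call the transaction rolls back on Dispose
            }
        }
    }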

Can an NHibernate transaction be continued after an exception?

I am using NHibernate to save objects that require that one of the properties on these objects must be unique. The design of the system is such that an attempt may occasionally be made to save the same object twice. This of course violates the uniqueness constraint, and NHibernate throws an exception. The exception happens at the time I attempt to save the object, rather than at Transaction.Commit() time. When this happens I want to simply catch the exception, discard the object, and continue saving the other similar objects. However, I have not found a way to make this work: once that exception has occurred, I cannot carry on saving other objects and commit the transaction.
The only work-around I have found is to always check first whether the object exists, by running a query on the unique property. That works, but it seems unnecessarily expensive; I would like to avoid that extra hit to the db. Is there a way to do this?
Thanks
The issue you've described must be solved at a higher level than the NHibernate session. Take a look at 9.8. Exception handling; an extract:
If the ISession throws an exception you should immediately rollback
the transaction, call ISession.Close() and discard the ISession
instance. Certain methods of ISession will not leave the session in a
consistent state.
So what I would suggest is to wrap the call to your data layer (DL) with some validation, placing the if/try logic outside of the Session.
Even when we use versioning (see 5.1.7. version), a very powerful way to survive concurrency, we are still handed StaleStateExceptions and have to handle them outside of the DL.
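As an illustration, here is a minimal sketch of that pattern, assuming a shared ISessionFactory; the Item entity and ItemWriter helper are made up. Each save gets its own short session and transaction, so a uniqueness violation only discards that one unit of work.

    using System.Collections.Generic;
    using NHibernate;

    public class Item { /* entity with a unique property */ }

    public class ItemWriter
    {
        private readonly ISessionFactory _sessionFactory;

        public ItemWriter(ISessionFactory sessionFactory)
        {
            _sessionFactory = sessionFactory;
        }

        public void SaveAll(IEnumerable<Item> items)
        {
            foreach (var item in items)
            {
                // One session per item: after an exception the session is in an
                // inconsistent state and must be discarded, per the docs above.
                using (var session = _sessionFactory.OpenSession())
                using (var tx = session.BeginTransaction())
                {
                    try
                    {
                        session.Save(item);
                        tx.Commit();
                    }
                    catch (HibernateException)
                    {
                        tx.Rollback(); // e.g. unique constraint violation: skip this item
                    }
                }
            }
        }
    }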

Can I use NHibernate's AdoNetTransactionFactory with distributed transactions?

I am dealing with a strange issue related to NHibernate and distributed transactions in a WCF service. See Deadlocks causing 'Server failed to resume the transaction' with NHibernate and distributed transactions for more details.
One thing that seems to solve my problem is using NHibernate's AdoNetTransactionFactory, instead of AdoNetWithDistributedTransactionsFactory.
I believe that the AdoNetWithDistributedTransactionsFactory is involved in making NHibernate's second-level caching mechanism work correctly, but we're not using that. What (if any) other problems are there with using AdoNetTransactionFactory with distributed transactions?
Thanks for your time!
I notice that you mentioned in your other question/answer:
SqlConnection class is not thread-safe, and that includes closing the connection
on a separate thread. Based on this response we have filed a
bug report for NHibernate.
However, from NHibernate's documentation:
11.2. Threads and connections
You should observe the following practices when creating NHibernate Sessions:
Never create more than one concurrent ISession or ITransaction instance per database connection.
Be extremely careful when creating more than one ISession per database per transaction. The ISession itself keeps track of updates made to loaded objects, so a different ISession might see stale data.
The ISession is not threadsafe! Never access the same ISession in two concurrent threads. An ISession is usually only a single unit-of-work!
If you are trying to multi-thread the connection with NHibernate, it is perhaps just not going to work. Have you considered a different ORM, such as Entity Framework?
No matter what ORM you choose, though, the database connection will not be thread-safe. This is universal.
"Many DB drivers are not thread safe. Using a singleton means that if you have many threads, they will all share the same connection. The singleton pattern does not give you thread safety. It merely allows many threads to easily share a "global" instance." - https://stackoverflow.com/a/6507820/1026459
Using AdoNetTransactionFactory with distributed system transactions will cause those transactions to be ignored by NHibernate, which has the following consequences:
ConnectionReleaseMode.AfterTransaction will not be honored. Instead, NHibernate will release the connection after each statement, and so will re-acquire a connection from the pool for the next one. Depending on your data provider, this may trigger escalation of the transaction to a distributed one.
FlushMode.Commit will not be honored. Explicit flushes will be required instead. (Auto-flushes before queries may still occur.)
Work that needs to be isolated from the current system transaction will still be included inside it (unless the connection string's Enlist property is false). Such work may include id-generator queries, such as retrieving the next high value for a table hilo generator. If the transaction gets rolled back, NHibernate may then use conflicting ids.
The NHibernate session will not be able to correctly track the locks it holds on entities. Considering itself outside of a transaction, it will believe it holds no locks on them, so it may try (at the request of user code, for example) to re-lock them with a lower lock level than the one the transaction already holds in the database. I am not sure what the outcome of that would be (at best it is ignored, at worst...).
The second-level cache will be disabled as soon as you start modifying data. NHibernate "invalidates" cache entries in that situation and re-enables them, updated, only on transaction completion. But since it will not be aware of the transaction...
Some extensions (maybe Envers) may rely on NHibernate transaction events and will no longer work as expected.
I strongly recommend upgrading to NHibernate 3.2 (or a version close to it). Why? Since 2.1, there have been significant improvements (read: a rewrite) to the AdoNetWithDistributedTransactionsFactory. As a matter of fact, it now handles TransactionScopes/ambient transactions and the like correctly. When we ran 2.1 in production we encountered many issues related to distributed transactions; we pretty much had to fix a ton of stuff ourselves and recompile NHibernate. 3.2 seems to have fixed many issues around the subject.
I don't have the source near me, but if memory doesn't fail me, the AdoNetTransactionFactory doesn't check for or handle ambient transactions. So you are left with only the transactions NHibernate itself starts on the session (by means of ISession.BeginTransaction()).
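For what it's worth, the transaction factory is selected through the transaction.factory_class configuration property; here is a minimal sketch (the exact assembly-qualified type name should be checked against your NHibernate version):

    using NHibernate.Cfg;

    class FactoryConfig
    {
        static Configuration Build()
        {
            var cfg = new Configuration();
            cfg.Configure(); // reads hibernate.cfg.xml
            // Pick the plain ADO.NET transaction factory (verify the name
            // against the NHibernate version you are running).
            cfg.SetProperty("transaction.factory_class",
                "NHibernate.Transaction.AdoNetTransactionFactory, NHibernate");
            return cfg;
        }
    }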

NHibernate, ActiveRecord, Transaction database locks and when Commits are flushed

This is a common question, but the explanations found so far and observed behaviour are some way apart.
We want the following NHibernate strategy in our MVC website:
A SessionScope for the request (to track changes)
An ActiveRecord.TransactionScope to wrap our inserts only (to enable rollback/commit of a batch)
Selects to be outside a Transaction (to reduce extent of locks)
Delayed Flush of inserts (so that our insert/updates occur as a UoW at end of session)
Now currently we:
Don't get the implied transaction from the SessionScope (with FlushAction Auto or Never)
If we use ActiveRecord.TransactionScope there is no delayed flush, and any contained selects are also caught up in a long-running transaction.
I'm wondering if it's because we have an old version of NHibernate (it was from trunk, very near 2.0).
We just can't get the expected NHibernate behaviour, and performance sucks (we're using NHProf and SQL Profiler to monitor db locks).
Here's what we have tried since:
Written our own TransactionScope (implementing ITransactionScope) that:
Opens an ActiveRecord.TransactionScope on Commit, not in the ctor (delaying the transaction until needed)
Opens a 'SessionScope' in the ctor if none is available (as a guard)
Converted our ids from identity to Guid
This stopped the auto-flush of inserts/updates outside of the Transaction (!)
Now we have the following application behaviour:
Request from MVC
SELECTs needed by services are fired, all outside a transaction
Repository.Add calls do not hit the db until scope.Commit is called in our Controllers
All INSERTs / UPDATEs occur wrapped inside a transaction as an atomic unit, with no SELECTs contained.
... But for some reason NHProf now != SQL Profiler (selects seem to happen in the db before NHProf reports them).
NOTE
Before I get flamed I realise the issues here, and know that the SELECTs aren't in the Transaction. That's the design. Some of our operations will contain the SELECTs (we now have a couple of our own TransactionScope implementations) in serialised transactions. The vast majority of our code does not need up-to-the-minute live data, and we have serialised workloads with individual operators.
ALSO
If anyone knows how to get an identity column (non-PK) refreshed post-insert without needing to manually refresh the entity, in particular by using ActiveRecord markup (I think it's possible in NHibernate mapping files using a 'generated' attribute), please let me know!!
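For reference, in an NHibernate hbm.xml mapping file the attribute in question looks roughly like this sketch (property and column names are placeholders; verify the generated/insert/update combination against your NHibernate version):

    <!-- generated="insert" tells NHibernate the database fills this column
         on INSERT, so NHibernate re-reads it after inserting. -->
    <property name="LegacyNumber" column="LegacyNumber"
              generated="insert" insert="false" update="false" />

With generated="insert", NHibernate issues a SELECT after the INSERT to refresh the property, so no manual entity refresh is needed. Whether ActiveRecord markup exposes an equivalent attribute I can't say.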

ORM Support for Handling Deadlocks

Do you know of any ORM tool that offers deadlock recovery? I know deadlocks are a bad thing, but sometimes any system will suffer from them given the right amount of load. In SQL Server, the deadlock message says "Rerun the transaction", so I would suspect that rerunning a deadlocked statement is a desirable feature in ORMs.
I don't know of any ORM tool with special support for automatically rerunning transactions that failed because of deadlocks. However, I don't think an ORM makes dealing with locking/deadlocking issues very different. First, you should analyze the root cause of your deadlocks, then redesign your transactions and queries in a way that avoids deadlocks, or at least reduces them. There are lots of options for improvement, like choosing the right isolation level for (parts of) your transactions, using lock hints, etc. This depends much more on your database system than on your ORM. Of course it helps if your ORM allows you to use stored procedures for some fine-tuned commands.
If this doesn't avoid deadlocks completely, or you don't have the time to implement and test the real fix now, you could of course simply place a try/catch around your save/commit/persist call, check the caught exceptions for an indication that the failed transaction was a "deadlock victim", and then simply call save/commit/persist again after sleeping for a few seconds. Waiting a few seconds is a good idea, since deadlocks are often an indication of a temporary peak of transactions competing for the same resources, and rerunning the same transaction quickly again and again would probably make things even worse.
For the same reason you would probably want to make sure that you rerun the same transaction only once.
In a real-world scenario we once implemented this kind of workaround, and about 80% of the "deadlock victims" succeeded on the second go. But I strongly recommend digging deeper to fix the actual cause of the deadlocking, because these problems usually increase exponentially with the number of users. Hope that helps.
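A minimal sketch of that workaround against SQL Server, where error number 1205 identifies the deadlock victim (the helper name is made up):

    using System;
    using System.Data.SqlClient;
    using System.Threading;

    static class DeadlockRetry
    {
        public static void RunWithOneRetry(Action work)
        {
            try
            {
                work();
            }
            catch (SqlException ex) when (ex.Number == 1205) // chosen as deadlock victim
            {
                Thread.Sleep(TimeSpan.FromSeconds(3)); // let the competing peak drain
                work(); // retry exactly once; a second deadlock propagates to the caller
            }
        }
    }

Wrap the save/commit/persist call in RunWithOneRetry(() => ...) to get the single retry described above.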
Deadlocks are to be expected, and SQL Server seems to be worse on this front than other database servers. First, you should try to minimize your deadlocks. Try using SQL Server Profiler to figure out why they are happening and what you can do about them. Next, configure your ORM not to read after making an update in the same transaction, if possible. Finally, after you've done that, if you happen to use Spring and Hibernate together, you can put in an interceptor to watch for this situation. Extend MethodInterceptor and place it in your Spring bean under interceptorNames. When the interceptor runs, use invocation.proceed() to execute the transaction, catch any exceptions, and define the number of times you want to retry.
An O/R mapper can't detect this, as the deadlock always occurs inside the DBMS and could be caused by locks set by other threads or even other apps.
To be sure a piece of code doesn't create a deadlock, always use these rules:
- do your fetching outside the transaction: first fetch, then perform processing, then perform DML statements like insert, delete and update
- every action inside a method or series of methods which contains / works with a transaction has to use the same connection to the database (see the sketch below). This is required because, for example, write locks are ignored by statements executed over the same connection (as that same connection set the locks ;)).
Often, deadlocks occur because code either fetches data inside a transaction in a way that causes a NEW connection to be opened (which then has to wait for locks) or uses different connections for the statements in a transaction.
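Here is a minimal ADO.NET sketch of the same-connection rule (connection string, table, and values are placeholders): every DML statement in the unit of work runs on the one SqlConnection/SqlTransaction pair, so none of them block on locks the transaction itself holds.

    using System.Data.SqlClient;

    static class UnitOfWork
    {
        static void SaveChanges(string connectionString)
        {
            using (var conn = new SqlConnection(connectionString))
            {
                conn.Open();
                // Fetching was done earlier, outside the transaction.
                using (var tx = conn.BeginTransaction())
                {
                    // Every statement is given the same conn and tx.
                    using (var cmd = new SqlCommand(
                        "UPDATE dbo.Orders SET Total = @t WHERE Id = @id", conn, tx))
                    {
                        cmd.Parameters.AddWithValue("@t", 99.95m);
                        cmd.Parameters.AddWithValue("@id", 1);
                        cmd.ExecuteNonQuery();
                    }
                    // ...further inserts/updates/deletes reuse conn and tx here...
                    tx.Commit();
                }
            }
        }
    }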
I had a quick look (no doubt you have too) and couldn't find anything suggesting that Hibernate, at least, offers this. This is probably because ORMs consider it outside the scope of the problem they are trying to solve.
If you are having issues with deadlocks, certainly follow some of the suggestions posted here to try to resolve them. After that, you just need to make sure all your database access code is wrapped with something which can detect a deadlock and retry the transaction.
One system I worked on was based on "commands" that were committed to the database when the user pressed save. It worked like this (sketched below in C#; the commands collection, database object, Log, and DeadlockException are stand-ins for our in-house data layer):
    while (true)
    {
        // start a database transaction
        using (var tx = database.BeginTransaction())
        {
            try
            {
                foreach (var command in commands)
                {
                    command.LoadData(tx); // read the data the command needs into objects
                    command.Run();        // update the objects
                }
                database.SaveObjects(tx); // save the objects to the database
                tx.Commit();              // no deadlock: commit the transaction
                break;                    // we are done
            }
            catch (DeadlockException)
            {
                tx.Rollback();            // abort the database transaction
                Log.Deadlock();           // log the deadlock and try again
            }
        }
    }
You may be able to do something like this with any ORM; we used an in-house data access system, as ORMs were too new at the time.
We ran the commands outside of a transaction while the user was interacting with the system, then reran them as above (when the user did a "save") to cope with changes other people had made. As we already had a good idea of the rows each command would change, we could even use locking hints or "select for update" to take out all the write locks we needed at the start of the transaction. (We sorted the set of rows to be updated, to reduce the number of deadlocks even more.)
We run the commands outside of a transaction while the user was interacting with the system. Then rerun them as above (when you use did a "save") to cope with changes other people have made. As we already had a good ideal of the rows the command would change, we could even use locking hints or “select for update” to take out all the write locks we needed at the start of the transaction. (We shorted the set of rows to be updated to reduce the number of deadlocks even more)