NHibernate 3 - TransactionScope vs. NHibernate transactions - nhibernate

I need to choose between TransactionScope or NHibernate transactions for my new project.
Which is better? When should I use TransactionScope, and when NHibernate transactions?

They are different things.
You should always do your work inside a NHibernate transaction.
You can use TransactionScope as needed, for example, to use distributed transactions when there's more than one session involved.
NHibernate transactions will automatically enlist in distributed transactions, but they won't be created automatically, so the recommended pattern is: if you have a TransactionScope, open the NH transaction inside it.
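A minimal sketch of the recommended pattern, assuming an already-configured NHibernate ISessionFactory (the method and variable names here are illustrative, not from the original answer):

```csharp
using System.Transactions;
using NHibernate;

static class OrderSaver
{
    // Assumes sessionFactory is a configured NHibernate ISessionFactory
    // and order is a mapped entity.
    public static void SaveOrder(ISessionFactory sessionFactory, object order)
    {
        // The ambient TransactionScope lets other sessions or resources
        // enlist in the same (possibly distributed) transaction.
        using (var scope = new TransactionScope())
        using (var session = sessionFactory.OpenSession())
        using (var tx = session.BeginTransaction()) // NH transaction opened inside the scope
        {
            session.Save(order);
            tx.Commit();      // commit the NHibernate transaction first
            scope.Complete(); // then vote to complete the ambient transaction
        }
    }
}
```

If a second session (or any other resource manager) enlists inside the same scope, the transaction may escalate to a distributed one; with a single session and connection, it stays local.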

Transaction management is exposed to the application developer via the NHibernate
ITransaction interface. You aren’t forced to use this API—NHibernate lets you
control ADO.NET transactions directly.

Related

Run SQL without transaction

Is there a way to execute SQL or a stored procedure without creating an additional transaction in Entity Framework? There is a solution for Entity Framework (Stored Procedure without transaction in Entity Framework), but it is not available for .NET Core.
The default behavior of ExecuteSqlCommand in EF Core is different from EF6:
Note that this method does not start a transaction. To use this method with a transaction, first call BeginTransaction(DatabaseFacade, IsolationLevel) or UseTransaction(DatabaseFacade, DbTransaction).
Note that the current ExecutionStrategy is not used by this method since the SQL may not be idempotent and does not run in a transaction. An ExecutionStrategy can be used explicitly, making sure to also use a transaction if the SQL is not idempotent.
In other words, what you are asking for is the default behavior in EF Core, so no action is needed.
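Conversely, if you ever do want ExecuteSqlCommand to run inside a transaction in EF Core, you must open one explicitly, as the quoted docs say. A hedged sketch (AppDbContext and the SQL text are made up for illustration):

```csharp
using Microsoft.EntityFrameworkCore;

static class SqlRunner
{
    // Assumes AppDbContext is your EF Core DbContext type.
    public static void RunInTransaction(AppDbContext db)
    {
        // ExecuteSqlCommand does not start a transaction on its own,
        // so wrap it explicitly when you need atomicity.
        using (var tx = db.Database.BeginTransaction())
        {
            db.Database.ExecuteSqlCommand(
                "UPDATE Orders SET Status = 1 WHERE Id = {0}", 42);
            tx.Commit();
        }
    }
}
```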

JPA reading data with NO LOCK

In the application we're writing, it is required that we use the with(NOLOCK) in our queries. Just so that the queries don't take so long to process.
I haven't found anything on how to accomplish this. What I did find is how to enable optimistic or pessimistic locking, but as far as I know, that's for writing data, not reading.
Is there a way to do this?
We are using JPA and the Criteria API connecting to a MSSQL server and the application server is Glassfish 4.
The with(NOLOCK) behaviour is very similar to working in the READ_UNCOMMITTED transaction isolation level, as explained here. Given that, you can achieve what you want by using a DB connection that is configured at that isolation level. If you want to decide at execution time which level to use, simply get the underlying connection and change its transaction isolation level (afterwards you should change it back to the original level).
If you use the with(NOLOCK) feature for a different goal, e.g. to avoid some bugs, then you will have to write native queries for that.
The equivalent of WITH (NOLOCK) in JPA is to use READ_UNCOMMITTED isolation level.
@Transactional(isolation = Isolation.READ_UNCOMMITTED)
The right solution to your task is optimistic locking, which is enabled by default in the main JPA providers. In short: you need to do nothing to read data from the database without locking. On the other hand, when pessimistic mode is enabled, JPA locks the whole table row, typically through the database's row-locking mechanism. For more info look at the link.

EF and TransactionScope for both SQL Server and Oracle without escalating/spanning to DTC?

Can anyone update me on this topic?
I want to support both SQL Server and Oracle in my application.
Is it possible to have the following code (in the BL) working for both SQL Server and Oracle without escalating/spanning to distributed transactions (DTC)?
// dbcontext is created before, same dbcontext will be used by both repositories
using (var ts = new TransactionScope())
{
// create order - make use of dbcontext, possibly to call SaveChanges here
orderRepository.CreateOrder(order);
// update inventory - make use of same dbcontext, possibly to call SaveChanges here
inventoryRepository.UpdateInventory(inventory);
ts.Complete();
}
As of today, end of August 2013, I understand that it works for SQL Server 2008+ ... but what about Oracle? I found this thread... it looks like Oracle promotes to distributed transactions, but it is still not clear to me.
Does anyone have experience with writing apps to support both SQL Server and Oracle with Entity Framework to enlighten me?
Thanks!
Update: Finally I noticed EF6 comes with Improved Transaction Support. This, in addition to Remus' recommendations could be the solution for me.
First: never use var ts = new TransactionScope(). It is the one-liner that kills your app. Always use the explicit constructor that lets you specify the isolation level. See using new TransactionScope() Considered Harmful.
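The explicit constructor referred to above takes a TransactionOptions where the isolation level is spelled out; for example:

```csharp
using System;
using System.Transactions;

class Program
{
    static void Main()
    {
        var options = new TransactionOptions
        {
            // Without this, TransactionScope defaults to Serializable,
            // which is rarely what you want for web workloads.
            IsolationLevel = IsolationLevel.ReadCommitted,
            Timeout = TimeSpan.FromSeconds(30)
        };

        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            // An ambient transaction with the requested level now exists
            // for any connection that enlists inside this scope.
            Console.WriteLine(Transaction.Current.IsolationLevel);
            scope.Complete();
        }
    }
}
```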
Now about your question: the logic for not promoting two connections in the same scope to DTC relies heavily on the driver/provider cooperating to inform System.Transactions that the two distinct connections can manage the distributed transaction just fine on their own, because the resource manager involved is the same. SqlClient for SQL Server 2008 and later is a driver capable of this logic. The Oracle driver you use is not (and I'm not aware of any version that is, btw).
Ultimately it is really basic: if you do not want a DTC, do not create one! Make sure you use exactly one connection in the scope. It is clearly arguable that you do not need two connections. In other words, get rid of the two separate repositories in your data model. Use only one repository for Orders, Inventory and whatever else. You are shooting yourself in the foot with them and asking for pixie-dust solutions.
Update: Oracle driver 12c r1:
"Transaction and connection association: ODP.NET connections, by default, detach from transactions only when connection objects are closed or transaction objects are disposed"
Nope, DTC is needed for distributed transactions - and something spanning two different database technologies like this is a distributed transaction. Sorry!

Difference between a hibernate transaction and a database transaction done using sql queries?

Is there a difference between the two?
For example within a hibernate transaction we can access the database, run some java code and then access the database again. We can't do that within a transaction done via SQL can we? Is this the difference?
The two directly relate to each other - a Hibernate transaction maps to and controls the underlying JDBC (database) transaction.
You can do the same thing with direct JDBC / SQL, without Hibernate - though you'll need to call Connection.setAutoCommit(false) to get started. Otherwise, by default, a commit is called after each statement - making each statement run in its own transaction.
Some additional details are available at http://docs.oracle.com/javase/tutorial/jdbc/basics/transactions.html.

Should web applications use explicit SQL transactions?

Consider a regular web application doing mostly form-based CRUD operations over SQL database. Should there be explicit transaction management in such web application? Or should it simply use autocommit mode? And if doing transactions, is "transaction per request" sufficient?
I would only use explicit transactions when you're doing things that are actually transactional, e.g., issuing several SQL commands that are highly interrelated. I guess the classic example of this is a banking application -- withdrawing money from one account and depositing it in another must always succeed or fail as a batch, otherwise someone gets ripped off!
We use transactions on SO, but only sparingly. Most of our database updates are standalone and atomic. Very few have the properties of the banking example above.
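The banking example above, sketched with a plain ADO.NET transaction (the table, column names, and connection string are made up for illustration):

```csharp
using System.Data.SqlClient;

static class Bank
{
    // Transfers an amount between two accounts: both UPDATEs succeed,
    // or neither is applied.
    public static void Transfer(string connectionString, int fromId, int toId, decimal amount)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                try
                {
                    var withdraw = new SqlCommand(
                        "UPDATE Accounts SET Balance = Balance - @a WHERE Id = @id", conn, tx);
                    withdraw.Parameters.AddWithValue("@a", amount);
                    withdraw.Parameters.AddWithValue("@id", fromId);
                    withdraw.ExecuteNonQuery();

                    var deposit = new SqlCommand(
                        "UPDATE Accounts SET Balance = Balance + @a WHERE Id = @id", conn, tx);
                    deposit.Parameters.AddWithValue("@a", amount);
                    deposit.Parameters.AddWithValue("@id", toId);
                    deposit.ExecuteNonQuery();

                    tx.Commit();   // both updates become visible together
                }
                catch
                {
                    tx.Rollback(); // neither update is applied
                    throw;
                }
            }
        }
    }
}
```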
I strongly recommend using transactions to preserve data integrity, because autocommit mode can leave data partially saved.
This is usually handled for me at the database interface layer - The web application rarely calls multiple stored procedures within a transaction. It usually calls a single stored procedure which manages the entire transaction, so the web application only needs to worry about whether it fails.
Usually the web application is not allowed access to other things (tables, views, internal stored procedures) which could allow the database to be in an invalid state if they were attempted without being wrapped in a transaction initiated at the connection level by the client prior to their calls.
There are exceptions to this where a transaction is initiated by the web application, but they are generally few and far between.
You should use transactions given that different users will be hitting the database at the same time. I would recommend you do not use autocommit. Use explicit transaction brackets. As to the resolution of each transaction, you should bracket a particular unit of work (whatever that means in your context).
You might also want to look into the different transaction isolation levels that your SQL database supports. They will offer a range of behaviours in terms of what reading users see of partially updated records.
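One way to read "bracket a particular unit of work" in .NET terms: wrap everything a single request changes in one explicit scope, so it commits or rolls back as a whole. A sketch (the delegate stands in for whatever writes your handler performs):

```csharp
using System;
using System.Transactions;

static class UnitOfWork
{
    // One "unit of work": everything a single form submission changes,
    // committed or rolled back together.
    public static void HandleSubmission(Action writeChanges)
    {
        var options = new TransactionOptions { IsolationLevel = IsolationLevel.ReadCommitted };
        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            writeChanges();   // all inserts/updates for this request
            scope.Complete(); // commit only if nothing threw
        }                     // leaving the scope without Complete() rolls back
    }
}
```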
It depends on how the CRUD handling is done: if, and only if, every creation or modification of a model instance is made in a single UPDATE or INSERT query, you can use autocommit.
If you are dealing with CRUD in multiple-query mode (a bad idea, IMO) then you certainly should define transactions explicitly, as those queries are 'transactionally related' - you don't want to end up with half a model in your database. This is relevant because some web frameworks tend to do things the 'multiple query' way for various reasons.
As for which transaction mode to use it depends on what you can support in terms of data views (ie, how current the data needs to be when seen by clients) and what you'll have to support in terms of performance.
It is better to insert/update multiple tables in a single stored procedure. That way, there is no need to manage transactions from the web application.