In the application we're writing, we are required to use WITH (NOLOCK) in our queries, simply so that they don't take so long to process.
I haven't found anything on how to accomplish this. What I did find is how to enable optimistic or pessimistic locking, but as far as I know, that's for writing data, not reading.
Is there a way to do this?
We are using JPA and the Criteria API, connecting to an MSSQL server, and the application server is GlassFish 4.
The WITH (NOLOCK) behaviour is very similar to working at the READ_UNCOMMITTED transaction isolation level, as explained here. Given that, you can achieve what you want by using a DB connection that is configured at that isolation level. If you want to decide at execution time which level to use, simply get the underlying connection and change its transaction isolation level (afterwards you should change it back to the original level).
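For example, with Hibernate as the JPA provider you can unwrap the Session and work on the raw JDBC connection. A minimal sketch, assuming Hibernate (for the unwrap and doWork calls) and an injected EntityManager em; whether the level can be switched mid-transaction depends on your driver and pool:

import java.sql.Connection;
import org.hibernate.Session;

Session session = em.unwrap(Session.class);
session.doWork(connection -> {
    int original = connection.getTransactionIsolation();
    // dirty reads from here on, comparable to WITH (NOLOCK)
    connection.setTransactionIsolation(Connection.TRANSACTION_READ_UNCOMMITTED);
    try {
        // run the reads here, via plain JDBC or queries on this same session
    } finally {
        // restore the original level so the pooled connection isn't left dirty
        connection.setTransactionIsolation(original);
    }
});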
If you use the WITH (NOLOCK) feature for a different goal, e.g. to avoid some bugs, then you will have to write native queries for that.
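A minimal sketch of such a native query, assuming a hypothetical Customers table; the hint passes straight through to SQL Server:

// inside a method with an injected EntityManager em
@SuppressWarnings("unchecked")
List<Object[]> rows = em.createNativeQuery(
        "SELECT id, name FROM Customers WITH (NOLOCK)")
    .getResultList();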
The equivalent of WITH (NOLOCK) in JPA is to use the READ_UNCOMMITTED isolation level.
@Transactional(isolation = Isolation.READ_UNCOMMITTED)
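Note that the @Transactional attribute shown here is Spring's annotation (plain JPA's javax.transaction.Transactional has no isolation attribute). A sketch of how it might look in a Spring service, with a hypothetical Order entity:

import org.springframework.transaction.annotation.Isolation;
import org.springframework.transaction.annotation.Transactional;

@Transactional(readOnly = true, isolation = Isolation.READ_UNCOMMITTED)
public List<Order> findOrdersDirty() {
    // runs without shared locks, like WITH (NOLOCK) on SQL Server
    return em.createQuery("select o from Order o", Order.class).getResultList();
}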
The right solution to your task is optimistic locking, which is enabled by default in the main JPA providers. In short: you need to do nothing to read data from the database without locking. On the other hand, when pessimistic mode is enabled, JPA locks the whole table row, typically through the database's row-locking mechanism. For more info, look at the link.
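A sketch of what that looks like: a single @Version attribute on the entity is enough, and reads then take no database locks at all (the entity name is hypothetical):

import java.math.BigDecimal;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Version;

@Entity
public class Account {
    @Id
    private Long id;

    @Version
    private long version; // bumped by the provider on every update; a stale
                          // value at commit raises an OptimisticLockException

    private BigDecimal balance;

    // getters and setters omitted
}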
Related
I heard that SQL Server SELECT statements can cause blocking.
So I have an MVC application with EF and SQL Server 2008, and it shares its database with another application that writes data very frequently. The MVC application generates real-time reports based on the data that comes from that other application.
Given that scenario, is it possible that, while a report is being generated, it will block some tables that the other application is trying to write to?
I tried making some manual inserts and updates while a report was being generated, and they were handled fine. Have I misunderstood something?
This is one of the reasons why, in Entity Framework 6 for SQL Server, a default in database creation has changed:
EF is now aligned with a “best practice” for SQL Server databases, which is to configure the database’s READ_COMMITTED_SNAPSHOT setting to ON. This means that, by default, the database will create a snapshot of itself every time a change is made. Queries will be performed on the snapshot while updates are performed on the actual database.
So with databases created by EF 5 and lower, READ_COMMITTED_SNAPSHOT is OFF which means that
the Database Engine uses shared locks to prevent other transactions from modifying rows while the current transaction is running a read operation.
Of course you can always change the setting yourself:
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON
READ UNCOMMITTED should help, because it doesn't issue shared locks on the data being retrieved, so it doesn't bother your other application that intensively updates the data. Another option is to use the SNAPSHOT isolation level for your long-running SELECT. This approach preserves data integrity for the selected data, but at the cost of higher CPU usage and more intense tempdb usage.
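A sketch of the SNAPSHOT option from plain JDBC; it assumes ALLOW_SNAPSHOT_ISOLATION has been switched ON for the database, and the table and column names are hypothetical:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

try (Connection conn = dataSource.getConnection();
     Statement stmt = conn.createStatement()) {
    stmt.execute("SET TRANSACTION ISOLATION LEVEL SNAPSHOT");
    try (ResultSet rs = stmt.executeQuery("SELECT id, total FROM Orders")) {
        while (rs.next()) {
            // the report sees one consistent snapshot; writers are not blocked
        }
    }
}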
I have a big SQL Server 2008 R2 database with many rows that are updated constantly. Updating is done by a back end service application that calls stored procedures. Within one of those stored procedures there is a SQL cursor that recalculates and updates data. This all runs fine.
But, our frontend web application needs to search through these rows and this search sometimes results in a
Lock request time out period exceeded. at
Telerik.OpenAccess.RT.Adonet2Generic.Impl.PreparedStatementImp.executeQuery()..
After doing some research, I have found that the best way to make this query run without problems is to run it at the "read uncommitted" isolation level. I've found that this setting can be made in the Telerik OpenAccess settings, but that setting affects the complete database ORM project. That's not what I want! I want this level for this query only.
Is there a way to make this specific LINQ query run at the read-uncommitted isolation level?
Or can we make this one query to use a WITH NOLOCK hint?
Use
SET LOCK_TIMEOUT -1
at the beginning of your query.
See the reference manual
Running queries at the read-uncommitted isolation level (and using the NOLOCK hint) can cause many strange problems; you have to understand clearly why you are doing this and how it can interfere with your dataflow.
We're using NHibernate with Memcache as the second level cache. Occasionally there is a need for more advanced queries or bulk query operations. From the book Nhibernate in Action they recommend the following:
"It’s our view that ORM isn’t suitable for mass-update (or mass-delete) operations. If
you have a use case like this, a different strategy is almost always better: call a stored
procedure in the database, or use direct SQL UPDATE and DELETE statements for that
particular use case."
My concern is that queries against the underlying database do not reflect in the cache (at least until cache expiry) and I was wondering if anyone has come up with any effective strategies for mixing and matching NHibernate with custom SQL statements?
Is there any way of getting, say, a bulk UPDATE statement (executed with custom SQL) to be reflected in the second-level cache? I am aware of being able to manually evict, but this removes the items from the cache and therefore increases hits on the database.
Does the community have any solutions that have been found to be effective in dealing with this problem?
As far as I know, there is no method to keep the second-level cache up to date with mass updates. But you can partially evict the cache, as described in: http://www.nhforge.org/doc/nh/en/index.html#performance-sessioncache.
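For example (sketched in Java against Hibernate, whose cache API NHibernate mirrors with ISessionFactory.Evict/EvictEntity; the entity class is hypothetical), you can drop just the stale region right after the bulk statement:

import org.hibernate.SessionFactory;

// after the custom bulk UPDATE/DELETE has run:
sessionFactory.getCache().evictEntityRegion(Customer.class);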
Well, I think my question says it all. I need to know if Groovy SQL supports two phase commits. I'm actually programming a Grails Service where I want to define a method which does the following:
Get an SQL instance for database 1,
Get an SQL instance for database 2,
Open a transaction somehow,
Within the transaction, call two different stored procedures on each database respectively,
Then commit somehow, or roll back on both connections if needed.
I didn't find any useful information yet about this anywhere on the web.
I have to program two-phase commits anyway, so even if this is supported by some other means (e.g. getting help from Spring artifacts and using them in Grails), please guide me. This has become a show-stopper for me at the moment.
Note: I'm using MySQL and mysql-connector driver.
Thanks,
Alam Sher
The current version of MySQL seems to support two-phase commits as long as you're using the InnoDB storage engine. There are other restrictions.
MySQL reference for two-phase commit
Groovy added "transaction support" in 1.7, but I'm not certain what they mean by that.
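In practice the usual route is to let a JTA transaction manager drive the two-phase commit rather than doing it by hand. A sketch assuming such a manager (e.g. Atomikos or Bitronix wired into the Grails app) with both MySQL DataSources set up as XA-aware pools; procedure and variable names are hypothetical:

import java.sql.CallableStatement;
import java.sql.Connection;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;

void callBoth(UserTransaction utx, DataSource db1, DataSource db2) throws Exception {
    utx.begin();
    try {
        try (Connection c1 = db1.getConnection();
             CallableStatement p1 = c1.prepareCall("{call proc_on_db1(?)}");
             Connection c2 = db2.getConnection();
             CallableStatement p2 = c2.prepareCall("{call proc_on_db2(?)}")) {
            p1.setLong(1, 42L);
            p1.execute();
            p2.setLong(1, 42L);
            p2.execute();
        }
        utx.commit();   // the manager prepares, then commits, on both resources
    } catch (Exception e) {
        utx.rollback(); // either both stored procedures take effect or neither does
        throw e;
    }
}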
Consider a regular web application doing mostly form-based CRUD operations over SQL database. Should there be explicit transaction management in such web application? Or should it simply use autocommit mode? And if doing transactions, is "transaction per request" sufficient?
I would only use explicit transactions when you're doing things that are actually transactional, e.g., issuing several SQL commands that are highly interrelated. I guess the classic example of this is a banking application: withdrawing money from one account and depositing it in another must always succeed or fail as a batch, otherwise someone gets ripped off!
We use transactions on SO, but only sparingly. Most of our database updates are standalone and atomic. Very few have the properties of the banking example above.
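A minimal sketch of the banking example above in plain JDBC, with hypothetical table and column names:

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

void transfer(DataSource ds, long fromId, long toId, BigDecimal amount) throws SQLException {
    try (Connection conn = ds.getConnection()) {
        conn.setAutoCommit(false); // open an explicit transaction bracket
        try (PreparedStatement withdraw = conn.prepareStatement(
                 "UPDATE accounts SET balance = balance - ? WHERE id = ?");
             PreparedStatement deposit = conn.prepareStatement(
                 "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
            withdraw.setBigDecimal(1, amount);
            withdraw.setLong(2, fromId);
            withdraw.executeUpdate();
            deposit.setBigDecimal(1, amount);
            deposit.setLong(2, toId);
            deposit.executeUpdate();
            conn.commit();   // both updates land together...
        } catch (SQLException e) {
            conn.rollback(); // ...or neither does
            throw e;
        }
    }
}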
I strongly recommend using explicit transactions to keep data integrity safe, because autocommit mode can leave partially saved data behind.
This is usually handled for me at the database interface layer: the web application rarely calls multiple stored procedures within a transaction. It usually calls a single stored procedure which manages the entire transaction, so the web application only needs to worry about whether it fails.
Usually the web application is not allowed access to other things (tables, views, internal stored procedures) which could allow the database to be in an invalid state if they were attempted without being wrapped in a transaction initiated at the connection level by the client prior to their calls.
There are exceptions to this where a transaction is initiated by the web application, but they are generally few and far between.
You should use transactions given that different users will be hitting the database at the same time. I would recommend you do not use autocommit. Use explicit transaction brackets. As to the resolution of each transaction, you should bracket a particular unit of work (whatever that means in your context).
You might also want to look into the different transaction isolation levels that your SQL database supports. They will offer a range of behaviours in terms of what reading users see of partially updated records.
It depends on how the CRUD handling is done: if and only if all creations and modifications of model instances are made in a single update or insert query, you can use autocommit.
If you are dealing with CRUD in multiple-query mode (a bad idea, IMO), then you certainly should define transactions explicitly, as these queries would certainly be "transactionally related"; you won't want to end up with half a model in your database. This is relevant because some web frameworks tend to do things the "multiple query" way for various reasons.
As for which transaction mode to use, it depends on what you can support in terms of data views (i.e., how current the data needs to be when seen by clients) and what you'll have to support in terms of performance.
It is better to insert/update into multiple tables in a single stored procedure. That way, there is no need to manage transactions from the web application.