I heard that SQL Server SELECT statements can cause blocking.
I have an MVC application with EF and SQL Server 2008, and it shares its database with another application that writes data very frequently. The MVC application generates real-time reports based on the data that comes from that other application.
So, given that scenario, is it possible that while generating a report it will lock some tables that the other application is trying to write to?
I tried making some manual inserts and updates while a report was being generated and they were handled fine. Have I misunderstood something?
This is one of the reasons why, in Entity Framework 6 for SQL Server, a default for database creation has changed:
EF is now aligned with a “best practice” for SQL Server databases, which is to configure the database’s READ_COMMITTED_SNAPSHOT setting to ON. This means that, by default, the database will create a snapshot of itself every time a change is made. Queries will be performed on the snapshot while updates are performed on the actual database.
So with databases created by EF 5 and earlier, READ_COMMITTED_SNAPSHOT is OFF, which means that
the Database Engine uses shared locks to prevent other transactions from modifying rows while the current transaction is running a read operation.
Of course you can always change the setting yourself:
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON
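Note that switching this option on usually requires that there be no other active connections to the database. A minimal sketch, assuming the database is named MyDb, that forces other sessions off and then verifies the setting:

-- Force other sessions off so the option change can proceed.
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- Verify the setting afterwards.
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'MyDb';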
READ UNCOMMITTED should help, because it doesn't issue shared locks on the data being retrieved, so it doesn't interfere with the other application that intensively updates the data. Another option is to use the SNAPSHOT isolation level on your long-running SELECT. This approach preserves consistency for the selected data, but at the cost of higher CPU and heavier tempdb usage.
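A rough sketch of both options from the reporting session, using a placeholder MyDb database and dbo.Orders table:

-- Option 1: run the report at READ UNCOMMITTED (dirty reads are possible).
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM dbo.Orders;              -- placeholder report query

-- Option 2: SNAPSHOT isolation; the database must allow it first.
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM dbo.Orders;              -- reads a consistent, row-versioned view
COMMIT TRANSACTION;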
Related
I'm working on automated tests for a particular web app. It uses a database to persist data (SQL Server in this case).
In our automated tests we perform several database changes (inserts, updates), and after the tests have been executed we want to restore the database to its original state.
The steps would be something like this:
Somehow create a backup
Execute tests
Restore data from backup
The first version was pretty simple: create a backup of the table and then restore it. But we encountered an issue with referential integrity.
After that we decided to use a full database backup, but I don't like this idea.
We were also thinking that we could track all references and back up only the needed tables rather than the whole database.
Our last thought was to somehow log our actions (inserts, updates) and then perform the reverse actions (deletes for inserts, updates with the old data for updates), but that looks rather complicated.
Maybe there is another solution?
Actually, there is no need to restore the database in native SQL Server terms, nor to track the changes and then revert them.
You can use ApexSQL Restore – a SQL Server tool that attaches both native and natively compressed SQL database backups and transaction log backups as live databases, accessible via SQL Server Management Studio, Visual Studio or any other third-party tool. It allows attaching single or multiple full, differential and transaction log backups.
For more information on how to use the tool in your scenario, check the "Using SQL database backups instead of live databases in a large development team" online article.
Disclaimer: I work as a Product Support Engineer at ApexSQL
If you are only making minimal changes to the database with inserts and updates, a much better alternative is to make those changes within a transaction that can be rolled back at the end of the test. This way SQL Server automatically keeps track of what you changed and reverts it back to the state it was in before the test began.
BEGIN TRANSACTION http://technet.microsoft.com/en-us/library/ms188929.aspx
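A minimal sketch of that pattern, with placeholder tables and test actions:

BEGIN TRANSACTION;

-- Perform the test changes (placeholder tables and values).
INSERT INTO dbo.Customers (Name) VALUES ('test customer');
UPDATE dbo.Orders SET Status = 'Cancelled' WHERE OrderId = 42;

-- ... run the test assertions here ...

-- Undo everything the test did.
ROLLBACK TRANSACTION;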
I think a better idea is to create a test database.
You can also create an interface for the data-access methods, with one implementation for the real data (the real DB) and a second one for the test DB.
You can create a database snapshot.
The snapshot will keep track of all data pages that are changed during your test. Once you are done, you can restore from the snapshot back to the previous state.
CREATE DATABASE [test_snapshot1] ON
( NAME = test, FILENAME = 'e:\SQLServer\Data\test_snapshot1.ss' )
AS SNAPSHOT OF [test];
GO

-- do all your tests

RESTORE DATABASE [test] FROM
DATABASE_SNAPSHOT = 'test_snapshot1';
GO
You have to create a snapshot file for each data file of your database. So if you have a database with 4 data files, your snapshot syntax should include 4 snapshot files.
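For example, a sketch of the multi-file case, assuming [test] has two data files with the logical names test and test_data2 (you can check the logical names with SELECT name FROM sys.database_files):

CREATE DATABASE [test_snapshot2] ON
( NAME = test,       FILENAME = 'e:\SQLServer\Data\test_snapshot2_1.ss' ),
( NAME = test_data2, FILENAME = 'e:\SQLServer\Data\test_snapshot2_2.ss' )
AS SNAPSHOT OF [test];
GO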
I found a simple solution: renaming the table.
The algorithm is pretty simple (a rough T-SQL sketch follows the list):
Rename the table, e.g. "table" to "table-backup" (foreign key references will now point to that backup table)
Create a new "table" from "table-backup" (the new "table" will not have any dependencies)
Perform any actions in the application
Drop the "table" containing the dirty data (this will not break referential integrity)
Rename "table-backup" back to "table" (referential integrity is preserved).
Thanks
We have an old database with a poorly thought out table structure, virtually no relationships set up, and no naming scheme. I've created a new database with a clean relational structure that follows proper design practices.
I'm looking for advice on different methods to migrate the old data over to the new format. This will require a lot of data re-shaping which won't be fun. The data is heavily accessed and the challenge will be to keep both databases in sync for all relevant data (accounts, important services etc).
I thought triggers might be the way to go here, but maybe there is a different method I'm unaware of (perhaps MS Sync Framework, or a code-level data adapter, which would be more work because there is so much data-access code spread all over the place: classic ASP and .NET across dozens of projects). The database in question is SQL Server 2005, running in SQL Server 2000 compatibility mode.
I think the way to go is to write a stored procedure in the new database that pulls your delta changes (only the modifications made between the last run and the instant the stored procedure is run), and to put this stored procedure in a SQL Agent job.
Configure the SQL Agent job to run every 15 minutes and let the data sync in.
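A rough sketch of such a delta-pull procedure; every name here (OldDb.dbo.tblAcct, NewDb.dbo.Accounts, the ModifiedDate change-tracking column, the SyncLog watermark table) is hypothetical and will differ for your schema:

CREATE PROCEDURE dbo.PullAccountDeltas
AS
BEGIN
    DECLARE @LastRun datetime;
    SELECT @LastRun = LastRunTime FROM dbo.SyncLog WHERE SyncName = 'Accounts';

    -- Re-shape and update rows changed since the last run.
    UPDATE n
    SET    n.AccountName = o.acct_nm
    FROM   NewDb.dbo.Accounts n
    JOIN   OldDb.dbo.tblAcct  o ON o.acct_id = n.LegacyAccountId
    WHERE  o.ModifiedDate > @LastRun;

    -- Insert rows that do not exist in the new structure yet.
    INSERT INTO NewDb.dbo.Accounts (LegacyAccountId, AccountName)
    SELECT o.acct_id, o.acct_nm
    FROM   OldDb.dbo.tblAcct o
    WHERE  o.ModifiedDate > @LastRun
      AND  NOT EXISTS (SELECT 1 FROM NewDb.dbo.Accounts n
                       WHERE n.LegacyAccountId = o.acct_id);

    -- Record the watermark for the next run.
    UPDATE dbo.SyncLog SET LastRunTime = GETDATE() WHERE SyncName = 'Accounts';
END;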
Disadvantages of using triggers in this scenario:
Triggers will reduce performance, because SQL Server executes the trigger code along with the update/insert/delete statement and includes it in the same execution every time. For example, if your trigger code takes 2 seconds to execute and the update statement alone takes 2 seconds, the update time increases to 4 seconds with the trigger in place. So employing triggers in this case might create a significant performance bottleneck.
I'm dealing with the same situation at my work, and I'm currently writing an application to do the migration. The original database has no established relationships, so it's really like a set of disconnected spreadsheets. By building my own application, I'm able to migrate the data using newly-established foreign keys, and assign data-specific defaults in place of nulls.
I have a big SQL Server 2008 R2 database with many rows that are updated constantly. Updating is done by a back end service application that calls stored procedures. Within one of those stored procedures there is a SQL cursor that recalculates and updates data. This all runs fine.
But our front-end web application needs to search through these rows, and this search sometimes results in a
Lock request time out period exceeded. at
Telerik.OpenAccess.RT.Adonet2Generic.Impl.PreparedStatementImp.executeQuery()..
After doing some research I found that the best way to make this query run without problems is to run it at the "read uncommitted" isolation level. I've found that this setting can be made in the Telerik OpenAccess settings, but that setting affects the complete database ORM project. That's not what I want! I want this level for this query only.
Is there a way to make this specific LINQ query run at the read uncommitted isolation level?
Or can we make this one query use a WITH (NOLOCK) hint?
Use
SET LOCK_TIMEOUT -1
at the beginning of your query.
See the reference manual
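For reference, this is roughly what it looks like at the T-SQL level (dbo.Orders and CustomerId are placeholders; from OpenAccess the statement has to reach the server on the same connection as the query):

-- Wait indefinitely for locks instead of raising a lock timeout (session setting).
SET LOCK_TIMEOUT -1;
SELECT * FROM dbo.Orders WHERE CustomerId = 42;

-- Alternatively, accept dirty reads for this one query with a table hint.
SELECT * FROM dbo.Orders WITH (NOLOCK) WHERE CustomerId = 42;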
Running queries at the read uncommitted isolation level (or using the NOLOCK hint) can cause many strange problems; you have to clearly understand why you are doing it and how it can interfere with your data flow.
I'm developing an ETL application with batch processing. There is low (i.e. no) concurrency for updates. I'd like to avoid the overhead of granular locks and lock escalation by merely locking the entire table.
I'd like to avoid having to specify TABLOCK in every statement. Is there any way to set the locking granularity at the top of a stored procedure such that every statement automatically gets table locks on every table used? Shared or exclusive doesn't matter, though shared is preferred; the ETL will run overnight with no ad hoc query load, prior to a batch of reports that is triggered when the ETL is complete.
Thanks!
You will need to take a look at Transaction Isolation Levels
To be honest though, I can't see why you need to be doing anything. I would have thought SQL Server would do a good enough job of locking by itself.
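For comparison, the per-statement form the question is trying to avoid looks roughly like this (dbo.StagingOrders and dbo.FactOrders are placeholder ETL tables):

BEGIN TRANSACTION;

INSERT INTO dbo.FactOrders WITH (TABLOCKX)        -- exclusive table lock on the target
SELECT OrderId, CustomerId, Amount
FROM   dbo.StagingOrders WITH (TABLOCK);          -- shared table lock on the source

COMMIT TRANSACTION;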
I have a database on MS SQL 2000 that is being hit by hundreds of users at a time. There are also intense reports built with Reporting Services 2005 hitting the same database.
When there are lots of reports running and people using the database concurrently, we see blocking processes to the point where the system starts to time out any transaction made after the situation has gone on for some time.
Is there a global way to minimize blocking so that transactions can continue to flow?
Use optimistic locking, if updates are not happening often and the database is mainly used for reporting.
SQL Server has quite a pessimistic locking default.
A look into SQL Server Table Hints might get you started.
The reports can use WITH(NOLOCK).
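For example, a placeholder report query with the hint applied to each table it references:

SELECT   o.OrderDate, c.CustomerName, SUM(o.Amount) AS Total
FROM     dbo.Orders    o WITH (NOLOCK)
JOIN     dbo.Customers c WITH (NOLOCK) ON c.CustomerId = o.CustomerId
GROUP BY o.OrderDate, c.CustomerName;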
Other possibilities are having the reports run off a read-only replica of the database, or off a data warehouse version of the database that is optimized for the reporting needs.
Since you are already using NOLOCK hints and READ UNCOMMITTED isolation level for your reports, the investigation needs to turn to the transactional queries coming in. This may get deep. Perhaps applications are keeping transactions open too long. It may also be the case that you have a lot of table scans or range scans in some of the other query volume, and those may be holding shared locks for long-running transactions. Those shared locks will block your writers.
You need to start looking at sp_lock to see what kinds of locks are outstanding, see what locks the blocked queries are trying to obtain, and then examine the queries that are blocking the requestors.
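A starting point, using the SQL Server 2000-era tools (the spid value is illustrative):

-- Outstanding locks, per session.
EXEC sp_lock;

-- Which sessions are blocked and by whom (check the BlkBy column).
EXEC sp_who2;

-- Inspect the last statement sent by a blocking session, e.g. spid 57.
DBCC INPUTBUFFER (57);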
This will help you if you are unfamiliar with SQL Server locking:
Understanding SQL Server 2000 Locking
Also, perhaps you could describe your disk subsystem. It may be undersized.
Thanks everyone for your support. What we did to mitigate the problem was to create a new database with a log shipping procedure every hour to keep it in sync with the real one. The reports that do not need real-time data were pointed at that database, and the ones that need real-time data were restricted so that only a few people can access them. The drawbacks of this method are that the data can be up to one hour out of sync and we had to set up a new server for that purpose only. Also, when the log shipping procedure runs, every connection is dropped for a very short period of time, which can be a problem for really long procedures or reports. After this I will review the queries from the reports so I can understand what can be optimized. Thanks, and I will recommend the site to the whole IT department.