Setting Locking Granularity for All Statements in a Stored Procedure - Microsoft SQL Server 2000

I'm developing an ETL application with batch processing. There is low (i.e. no) concurrency for updates. I'd like to avoid the overhead of granular locks and lock escalation by merely locking the entire table.
I'd like to avoid having to specify TABLOCK in every statement. Is there any way to set the locking granularity at the top of a stored procedure such that every statement automatically gets table locks on every table used? Shared or exclusive doesn't matter, though shared is preferred; the ETL will run overnight with no ad hoc query load, prior to a batch of reports triggered when the ETL is complete.
Thanks!

You will need to take a look at Transaction Isolation Levels.
To be honest though, I can't see why you need to be doing anything. I would have thought SQL Server would do a good enough job of locking by itself.
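For what it's worth, here is a minimal sketch of what can and can't be centralized (the procedure and table names are hypothetical). The isolation level can be set once at the top of the procedure and applies to every statement in it, but it controls how long locks are held, not their granularity, so TABLOCK/TABLOCKX still goes on each statement:
CREATE PROCEDURE dbo.usp_NightlyEtl   -- hypothetical procedure name
AS
BEGIN
    -- Applies to every statement in the procedure, but affects lock
    -- duration/strictness, not granularity:
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE

    BEGIN TRANSACTION

    -- Granularity is still a per-statement hint:
    UPDATE dbo.StagingOrders WITH (TABLOCKX)   -- hypothetical table
    SET Processed = 1

    COMMIT TRANSACTION
END
An alternative worth testing on SQL Server 2000 is sp_indexoption, which can disable row and page locks on a table's indexes (e.g. EXEC sp_indexoption 'dbo.StagingOrders', 'AllowRowLocks', FALSE), leaving the engine no choice but table-level locks; verify the option names against the 2000 documentation before relying on this.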

Related

MS SQL Server monitoring table locks

I would like to ask whether there's some kind of mechanism that would allow me to monitor, capture and write (e.g. to a table) information about table locks that occurred.
What I have found so far are a few stored procedures that can be executed manually or periodically; what I actually need is to be able to catch the locks before/within the insert (more likely update) and save them to another table.
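For illustration, the periodic capture I can already do looks roughly like this (the logging table is my own invention, with loose column types so they accept whatever sp_lock returns):
-- One-off snapshot of current locks, persisted to a table:
CREATE TABLE dbo.LockLog (
    spid int, dbid int, ObjId int, IndId int,
    Type varchar(10), Resource varchar(64), Mode varchar(16), Status varchar(10)
)

INSERT INTO dbo.LockLog
EXEC sp_lock   -- column count/order must match sp_lock's result set
Running that on a schedule only samples the locks, though; it doesn't catch the ones held at the exact moment my update runs, which is what I'm after.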
I ask this question because I have a stored procedure that updates data in one of the tables in my database, and normally it executes in between 30 and 50 ms. But every once in a while something weird happens and it takes up to 90 s to execute the same procedure, using the same data and the same server configuration.
Should I use a trigger for this? If so, which trigger should I set up? I only need to capture locks on one particular table. From what I understand a shared lock will not allow inserts/updates, so I need all kinds of locks to be captured.
Thanks in advance

Can an Entity Framework select block the table?

I heard that SQL Server SELECT statements can cause blocking.
I have an MVC application with EF and SQL Server 2008, and it shares a DB with another application which writes some data very frequently. The MVC application generates some real-time reports based on the data that comes from the other application.
So given that scenario, is it possible that while generating a report it will block some tables the other application is trying to write to?
I tried to make some manual inserts and updates while a report was being generated, and they were handled fine. Have I misunderstood something?
This is one of the reasons why in Entity Framework 6 for Sql Server a default in database creation has changed:
EF is now aligned with a “best practice” for SQL Server databases, which is to configure the database’s READ_COMMITTED_SNAPSHOT setting to ON. This means that, by default, the database will create a snapshot of itself every time a change is made. Queries will be performed on the snapshot while updates are performed on the actual database.
So with databases created by EF 5 and lower, READ_COMMITTED_SNAPSHOT is OFF which means that
the Database Engine uses shared locks to prevent other transactions from modifying rows while the current transaction is running a read operation.
Of course you can always change the setting yourself:
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON
READ UNCOMMITTED should help, because it doesn't issue shared locks on the data being retrieved, so it doesn't bother the other application that intensively updates the data. Another option is to use the SNAPSHOT isolation level for your long-running SELECT. This approach preserves data integrity for the selected data, but at the cost of higher CPU usage and heavier tempdb usage.
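As a rough sketch of the SNAPSHOT option (reusing the MyDb name from the statement above; the report query is hypothetical):
-- Enable row versioning for snapshot transactions, once per database:
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON

-- Then opt in per session before the long-running report:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
SELECT CustomerId, SUM(Total) FROM dbo.Orders GROUP BY CustomerId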

Lock request time out period exceeded - Telerik OpenAccess ORM

I have a big SQL Server 2008 R2 database with many rows that are updated constantly. Updating is done by a back end service application that calls stored procedures. Within one of those stored procedures there is a SQL cursor that recalculates and updates data. This all runs fine.
But, our frontend web application needs to search through these rows and this search sometimes results in a
Lock request time out period exceeded. at
Telerik.OpenAccess.RT.Adonet2Generic.Impl.PreparedStatementImp.executeQuery()..
After doing some research I have found that the best way to make this query run without problems is to run it at the "read uncommitted" isolation level. I've found that this setting can be made in the Telerik OpenAccess settings, but that setting affects the complete ORM project. That's not what I want! I want this level for this one query only.
Is there a way to make this specific LINQ query run at the read uncommitted isolation level?
Or can we make this one query use a WITH (NOLOCK) hint?
Use
SET LOCK_TIMEOUT -1
at the beginning of your query.
See the reference manual
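For illustration (my inference, not part of the answer above): the server default is already -1, i.e. wait indefinitely, so the error in the question suggests something in the session had set a finite timeout; resetting it looks like this:
SET LOCK_TIMEOUT -1      -- wait for locks indefinitely instead of raising error 1222
SELECT @@LOCK_TIMEOUT    -- returns the current session setting: -1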
Running queries at the read uncommitted isolation level (or using the NOLOCK hint) can cause many strange problems; you have to clearly understand why you are doing it and how it can interfere with your data flow.

Is it possible to run multiple DDL statements inside a transaction (within SQL Server)?

I'm wondering if it is possible to run multiple DDL statements inside a transaction. I'm especially interested in SQL Server, even though answers for other databases (Oracle, PostgreSQL at least) could also be interesting.
I've been doing some "CREATE TABLE" and "CREATE VIEW" statements for the created table inside a transaction, and there seem to be some inconsistencies, so I'm wondering if the DDL shouldn't be done inside the transaction...
I could probably move the DDL outside the transaction, but
I'd like to get some references on this. What I have found so far:
The MSDN page Isolation Levels in the Database Engine clearly states that there are restrictions on what DDL operations can be performed in an explicit transaction that is running under snapshot isolation - but I'm not using snapshot isolation, and violating those restrictions should result in an error anyway.
Could this be interpreted to mean that DDL operations can be performed in an explicit transaction under other isolation levels?
The Oracle® Database Gateway for SQL Server User's Guide#DDL Statements states that only one DDL statement can be executed in a given transaction - does this also apply to SQL Server used directly?
For Oracle:
Within the SO question Unit testing DDL statements that need to be in a transaction it is said that Oracle does an implicit commit for a DDL statement (though without references).
If it matters something, I'm doing this with Java through the JTDS JDBC driver.
b.r. Touko
I know most databases have restrictions, but Postgres doesn't. You can run any number of table creations, column changes and index changes in a transaction, and the changes aren't visible to other users until COMMIT succeeds. That's how databases should be! :-)
As for SQL Server, you can run DDL inside a transaction, but SQL Server does not version metadata, so changes would be visible to others before the transaction commits. Some DDL statements can be rolled back if you are in a transaction, but to find out which ones work and which ones don't, you'll need to run some tests.
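A minimal test sketch for one of those cases (the table name is made up); on SQL Server, CREATE TABLE participates in the transaction and is undone by the rollback:
BEGIN TRANSACTION
    CREATE TABLE dbo.DdlTest (Id int)
ROLLBACK TRANSACTION

SELECT OBJECT_ID('dbo.DdlTest')   -- NULL: the CREATE TABLE was rolled back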
If you are creating tables, views, etc. on the fly (other than table variables or temp tables), you may truly need to rethink your design. This is not stuff that should normally happen from the user interface. Even if you must allow some customization, the DDL statements should not be happening at the same time as running transactional inserts/updates/deletes. It is far better to separate these functions.
This is also something that needs a healthy dose of consideration and testing as to what happens when two users try to change the structure of the same table at the same time and then run a transaction to insert data. There's some truly scary stuff that can happen when you allow users to make adjustments to your database structure.
Also some DDL statements must always be the first statement of a batch. Look out for that too when you are running them.
For the general case and IIRC, it's not safe to assume DDL statements are transactional.
That is to say, there is a great deal of leeway in how schema alterations interact within a transaction (assuming they interact at all). This can vary by vendor or even by the particular installation (i.e., it's up to the DBA), I believe. So at the very least, don't use one DBMS to assume that others will treat DDL statements the same way.
Edit: MySQL is an example of a DBMS which doesn't support DDL transactions at all. Also, if you have database replication/mirroring you have to be very careful that the replication service (Sybase's replication is the norm, believe it or not) will actually replicate the DDL statement.
Could it be that in MS SQL, implicit transactions are triggered when DDL and DML statements are run? Does it help if you toggle this off? Use
SET IMPLICIT_TRANSACTIONS OFF
EDIT: another possibility
- You can't combine CREATE VIEW with other statements in the same batch. CREATE TABLE is ok.
You separate batches with GO.
EDIT2: You CAN use multiple DDL statements in a transaction, as long as they are separated with GO into different batches - see the sketch below.
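A sketch of that (the table and view names are made up). Note that GO is a client-side batch separator understood by tools like Query Analyzer and sqlcmd, not by T-SQL itself, so through a raw JDBC driver like jTDS you would instead wrap the view creation in EXEC() to give it its own nested batch:
BEGIN TRANSACTION
CREATE TABLE dbo.Foo (Id int)
GO
-- CREATE VIEW must be the only statement in its batch, hence the GO above;
-- the open transaction continues across batches on the same connection:
CREATE VIEW dbo.vFoo AS SELECT Id FROM dbo.Foo
GO
COMMIT TRANSACTION

-- Single-batch alternative (works from JDBC, where GO is not available):
BEGIN TRANSACTION
CREATE TABLE dbo.Bar (Id int)
EXEC ('CREATE VIEW dbo.vBar AS SELECT Id FROM dbo.Bar')
COMMIT TRANSACTION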

MS SQL Concurrency, excess Locks

I have a database on MS SQL 2000 that is being hit by hundreds of users at a time. There are intense reports using Reporting Services 2005 hitting the same database.
When there are lots of reports running and people using the database concurrently, we see blocking processes to the point that the system starts to time out any transaction made in that situation.
Is there a global way to minimize blocking so that transactions can continue to flow?
Use optimistic locking if updates are not happening often and the database is mainly used for reporting.
SQL Server has quite a pessimistic locking default.
A look into SQL Server Table Hints might get you started.
The reports can use WITH (NOLOCK) - see the sketch below.
Other possibilities are having the reports run off a read-only replica of the database or running off a datawarehouse version of the database which is optimized for the reporting needs.
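A sketch of the NOLOCK suggestion (the table and column names are invented; the caveats about read uncommitted mentioned elsewhere on this page apply, since dirty reads are possible):
-- Report query that reads without taking or honoring shared locks:
SELECT CustomerId, SUM(Amount) AS Total
FROM dbo.Orders WITH (NOLOCK)
GROUP BY CustomerId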
Since you are already using NOLOCK hints and READ UNCOMMITTED isolation level for your reports, the investigation needs to turn to the transactional queries coming in. This may get deep. Perhaps applications are keeping transactions open too long. It may also be the case that you have a lot of table scans or range scans in some of the other query volume, and those may be holding shared locks for long-running transactions. Those shared locks will block your writers.
You need to start looking at sp_lock to see what kinds of locks are outstanding, see what locks the blocked queries are trying to obtain, and then examine the queries that are blocking the requestors.
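On SQL Server 2000 that investigation loop looks roughly like this (the spid is a placeholder you would take from the previous output):
EXEC sp_who2           -- the BlkBy column shows which spid is doing the blocking
EXEC sp_lock           -- outstanding locks; Status = WAIT marks blocked requests
DBCC INPUTBUFFER (53)  -- 53 = placeholder spid; shows the blocker's last statement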
This will help you if you are unfamiliar with SQL Server locking:
Understanding SQL Server 2000 Locking
Also, perhaps you could describe your disk subsystem. It may be undersized.
Thanks everyone for your support. What we did to mitigate the problem was to create a new database, with a log shipping procedure running every hour to keep it in sync with the real one. The reports that do not need real-time data were pointed at that database, and the ones that need real-time data were restricted so that only a few people can access them. The drawbacks of this method are that the data can be up to one hour out of sync and that we needed to set up a new server for that purpose only. Also, when the log shipping procedure runs, every connection is dropped for a very short period of time, which can be a problem for really long procedures or reports. After this I will examine the queries from the reports so I can understand what can be optimized. Thanks, and I will recommend the site to the whole IT department.